Literature Review
A common task when exploring a new topic is to carry out a literature review. In this example, we will look at how a multi-agent team can be set up to carry out a simple literature review, using three agents:
Arxiv search agent: Uses the Arxiv API to search for papers related to a given topic and return the results.
Google search agent: Uses the Google Search API to find articles related to a given topic and return the results.
Report agent: Generates a report based on the information gathered by the Arxiv and Google search agents.
First, let's import the required modules.
from saptiva_agents import LLAMA_MODEL
from saptiva_agents.agents import AssistantAgent
from saptiva_agents.conditions import TextMentionTermination
from saptiva_agents.teams import RoundRobinGroupChat
from saptiva_agents.ui import Console
from saptiva_agents.tools import FunctionTool
from saptiva_agents.base import SaptivaAIChatCompletionClient
Defining Tools
Next, we define the tools the agents will use to carry out their tasks. In this case, we define a simple arxiv_search function that uses the arxiv library to look up papers on a given topic, and a google_search function that queries the Google Custom Search API and enriches each result with page content.
Finally, we wrap the functions in the FunctionTool class, which lets us use them as tools within the agents.
Note
You will need to set the appropriate environment variables for the tools to work correctly.
You also need to install the required libraries:
!pip install arxiv requests beautifulsoup4 python-dotenv
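As a quick sanity check (a minimal sketch, assuming you keep the Google credentials in a local .env file, as the google_search function below does), you can confirm the variables are visible before running the agents:
import os

from dotenv import load_dotenv

# Load GOOGLE_API_KEY and GOOGLE_SEARCH_ENGINE_ID from a local .env file, if present.
load_dotenv()

for var in ("GOOGLE_API_KEY", "GOOGLE_SEARCH_ENGINE_ID"):
    if not os.getenv(var):
        print(f"Warning: {var} is not set; google_search will raise a ValueError.")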
def google_search(query: str, num_results: int = 2, max_chars: int = 500) -> list:  # type: ignore[type-arg]
    """Search Google Custom Search and return results enriched with page text."""
    import os
    import time

    import requests
    from bs4 import BeautifulSoup
    from dotenv import load_dotenv

    load_dotenv()

    api_key = os.getenv("GOOGLE_API_KEY")
    search_engine_id = os.getenv("GOOGLE_SEARCH_ENGINE_ID")

    if not api_key or not search_engine_id:
        raise ValueError("API key or Search Engine ID not found in environment variables")

    url = "https://www.googleapis.com/customsearch/v1"
    params = {"key": api_key, "cx": search_engine_id, "q": query, "num": num_results}

    response = requests.get(url, params=params)  # type: ignore[arg-type]

    if response.status_code != 200:
        print(response.json())
        raise Exception(f"API request error: {response.status_code}")

    results = response.json().get("items", [])

    def get_page_content(url: str) -> str:
        # Fetch a result page and return at most max_chars of its visible text.
        try:
            response = requests.get(url, timeout=10)
            soup = BeautifulSoup(response.content, "html.parser")
            text = soup.get_text(separator=" ", strip=True)
            words = text.split()
            content = ""
            for word in words:
                if len(content) + len(word) + 1 > max_chars:
                    break
                content += " " + word
            return content.strip()
        except Exception as e:
            print(f"Error fetching {url}: {str(e)}")
            return ""

    enriched_results = []
    for item in results:
        body = get_page_content(item["link"])
        enriched_results.append(
            {"title": item["title"], "link": item["link"], "snippet": item["snippet"], "body": body}
        )
        time.sleep(1)  # Be polite to the servers

    return enriched_results
def arxiv_search(query: str, max_results: int = 2) -> list:  # type: ignore[type-arg]
    """
    Search Arxiv for papers and return the results, including abstracts.
    """
    import arxiv

    client = arxiv.Client()
    search = arxiv.Search(query=query, max_results=max_results, sort_by=arxiv.SortCriterion.Relevance)

    results = []
    for paper in client.results(search):
        results.append(
            {
                "title": paper.title,
                "authors": [author.name for author in paper.authors],
                "published": paper.published.strftime("%Y-%m-%d"),
                "abstract": paper.summary,
                "pdf_url": paper.pdf_url,
            }
        )

    # # Write the results to a file (requires `import json`)
    # with open('arxiv_search_results.json', 'w') as f:
    #     json.dump(results, f, indent=2)

    return results
# Wrap the functions as tools for the agents
google_search_tool = FunctionTool(
    google_search, description="Search Google for information, returns results with snippets and page content"
)
arxiv_search_tool = FunctionTool(
    arxiv_search, description="Search Arxiv for papers related to a given topic, including abstracts"
)
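Before wiring the tools into agents, it can help to call the underlying Python functions directly and confirm they return data. This is just a smoke-test sketch; the queries are arbitrary examples, and the Google call requires the environment variables described above:
# Call the plain functions (not the FunctionTool wrappers) to verify they work end to end.
papers = arxiv_search("no code multi agent systems", max_results=1)
print(papers[0]["title"] if papers else "No Arxiv results")

web_results = google_search("no code tools for multi agent AI systems", num_results=1)
print(web_results[0]["link"] if web_results else "No Google results")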
Defining Agents
Next, we define the agents that will carry out the tasks.
model_client = SaptivaAIChatCompletionClient(
    model=LLAMA_MODEL,
    api_key="YOUR_SAPTIVA_API_KEY",
)
google_search_agent = AssistantAgent(
    name="Google_Search_Agent",
    tools=[google_search_tool],
    model_client=model_client,
    description="An agent that can search Google for information, returns results with snippets and page content",
    system_message="You are a helpful AI assistant. Solve tasks using your tools.",
)
arxiv_search_agent = AssistantAgent(
    name="Arxiv_Search_Agent",
    tools=[arxiv_search_tool],
    model_client=model_client,
    description="An agent that can search Arxiv for papers related to a given topic, including abstracts",
    system_message="You are a helpful AI assistant. Solve tasks using your tools. In particular, you can consider the user's request and craft a search query that is likely to return relevant academic papers.",
)
report_agent = AssistantAgent(
    name="Report_Agent",
    model_client=model_client,
    description="Generates a report based on a given topic",
    system_message="You are a helpful assistant. Your task is to synthesize the extracted data into a high-quality literature review with CORRECT references. You MUST write a final report formatted as a literature review with CORRECT references. Your response should end with the word 'TERMINATE'",
)
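Optionally, a single agent can be exercised on its own before assembling the team. The sketch below assumes saptiva_agents mirrors the AutoGen AgentChat API, where an agent exposes an async run(task=...) method returning a TaskResult with a messages list; adjust it if the actual API differs:
# Hypothetical single-agent check (API assumed to match AutoGen AgentChat).
result = await arxiv_search_agent.run(task="Find recent papers on no-code tools for multi-agent systems")
print(result.messages[-1].content)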
Creating the Team
Finally, we create the team of agents and configure it to run the task.
termination = TextMentionTermination("TERMINATE")
team = RoundRobinGroupChat(
    participants=[google_search_agent, arxiv_search_agent, report_agent], termination_condition=termination
)
await Console(
    team.run_stream(
        task="Write a literature review on no code tools for building multi agent ai systems",
    )
)
await model_client.close()
---------- user ----------
Write a literature review on no code tools for building multi agent ai systems
---------- Google_Search_Agent ----------
[FunctionCall(id='call_bNGwWFsfeTwDhtIpsI6GYISR', arguments='{"query":"no code tools for building multi agent AI systems literature review","num_results":3}', name='google_search')]
[Prompt tokens: 123, Completion tokens: 29]
---------- Google_Search_Agent ----------
[FunctionExecutionResult(content='[{\'title\': \'Literature Review — AutoGen\', \'link\': \'https://microsoft.github.io/autogen/dev//user-guide/agentchat-user-guide/examples/literature-review.html\', \'snippet\': \'run( task="Write a literature review on no code tools for building multi agent ai systems", ) ... ### Conclusion No-code tools for building multi-agent AI systems\\xa0...\', \'body\': \'Literature Review — AutoGen Skip to main content Back to top Ctrl + K AutoGen 0.4 is a work in progress. Go here to find the 0.2 documentation. User Guide Packages API Reference Twitter GitHub PyPI User Guide Packages API Reference Twitter GitHub PyPI AgentChat Installation Quickstart Tutorial Models Messages Agents Teams Selector Group Chat Swarm Termination Custom Agents Managing State Examples Travel Planning Company Research Literature Review Core Quick Start Core Concepts Agent and\'}, {\'title\': \'Vertex AI Agent Builder | Google Cloud\', \'link\': \'https://cloud.google.com/products/agent-builder\', \'snippet\': \'Build and deploy enterprise ready generative AI experiences · Product highlights · Easily build no code conversational AI agents · Ground in Google search and/or\\xa0...\', \'body\': \'Vertex AI Agent Builder | Google Cloud Page Contents Vertex AI Agent Builder is making generative AI more reliable for the enterprise. Read the blog. Vertex AI Agent Builder Build and deploy enterprise ready generative AI experiences Create AI agents and applications using natural language or a code-first approach. Easily ground your agents or apps in enterprise data with a range of options. Vertex AI Agent Builder gathers all the surfaces and tools that developers need to build their AI agents\'}, {\'title\': \'AI tools I have found useful w/ research. What do you guys think ...\', \'link\': \'https://www.reddit.com/r/PhD/comments/14d6g09/ai_tools_i_have_found_useful_w_research_what_do/\', \'snippet\': "Jun 19, 2023 ... Need help deciding on the best ones, and to identify ones I\'ve missed: ASSISTANTS (chatbots, multi-purpose) Chat with Open Large Language Models.", \'body\': \'Reddit - Dive into anything Skip to main content Open menu Open navigation Go to Reddit Home r/PhD A chip A close button Get app Get the Reddit app Log In Log in to Reddit Expand user menu Open settings menu Log In / Sign Up Advertise on Reddit Shop Collectible Avatars Get the Reddit app Scan this QR code to download the app now Or check it out in the app stores Go to PhD r/PhD r/PhD A subreddit dedicated to PhDs. Members Online • [deleted] ADMIN MOD AI tools I have found useful w/ research.\'}]', call_id='call_bNGwWFsfeTwDhtIpsI6GYISR')]
---------- Google_Search_Agent ----------
Tool calls:
google_search({"query":"no code tools for building multi agent AI systems literature review","num_results":3}) = [{'title': 'Literature Review — AutoGen', 'link': 'https://microsoft.github.io/autogen/dev//user-guide/agentchat-user-guide/examples/literature-review.html', 'snippet': 'run( task="Write a literature review on no code tools for building multi agent ai systems", ) ... ### Conclusion No-code tools for building multi-agent AI systems\xa0...', 'body': 'Literature Review — AutoGen Skip to main content Back to top Ctrl + K AutoGen 0.4 is a work in progress. Go here to find the 0.2 documentation. User Guide Packages API Reference Twitter GitHub PyPI User Guide Packages API Reference Twitter GitHub PyPI AgentChat Installation Quickstart Tutorial Models Messages Agents Teams Selector Group Chat Swarm Termination Custom Agents Managing State Examples Travel Planning Company Research Literature Review Core Quick Start Core Concepts Agent and'}, {'title': 'Vertex AI Agent Builder | Google Cloud', 'link': 'https://cloud.google.com/products/agent-builder', 'snippet': 'Build and deploy enterprise ready generative AI experiences · Product highlights · Easily build no code conversational AI agents · Ground in Google search and/or\xa0...', 'body': 'Vertex AI Agent Builder | Google Cloud Page Contents Vertex AI Agent Builder is making generative AI more reliable for the enterprise. Read the blog. Vertex AI Agent Builder Build and deploy enterprise ready generative AI experiences Create AI agents and applications using natural language or a code-first approach. Easily ground your agents or apps in enterprise data with a range of options. Vertex AI Agent Builder gathers all the surfaces and tools that developers need to build their AI agents'}, {'title': 'AI tools I have found useful w/ research. What do you guys think ...', 'link': 'https://www.reddit.com/r/PhD/comments/14d6g09/ai_tools_i_have_found_useful_w_research_what_do/', 'snippet': "Jun 19, 2023 ... Need help deciding on the best ones, and to identify ones I've missed: ASSISTANTS (chatbots, multi-purpose) Chat with Open Large Language Models.", 'body': 'Reddit - Dive into anything Skip to main content Open menu Open navigation Go to Reddit Home r/PhD A chip A close button Get app Get the Reddit app Log In Log in to Reddit Expand user menu Open settings menu Log In / Sign Up Advertise on Reddit Shop Collectible Avatars Get the Reddit app Scan this QR code to download the app now Or check it out in the app stores Go to PhD r/PhD r/PhD A subreddit dedicated to PhDs. Members Online • [deleted] ADMIN MOD AI tools I have found useful w/ research.'}]
---------- Arxiv_Search_Agent ----------
[FunctionCall(id='call_ZdmwQGTO03X23GeRn6fwDN8q', arguments='{"query":"no code tools for building multi agent AI systems","max_results":5}', name='arxiv_search')]
[Prompt tokens: 719, Completion tokens: 28]
---------- Arxiv_Search_Agent ----------
[FunctionExecutionResult(content='[{\'title\': \'AutoGen Studio: A No-Code Developer Tool for Building and Debugging Multi-Agent Systems\', \'authors\': [\'Victor Dibia\', \'Jingya Chen\', \'Gagan Bansal\', \'Suff Syed\', \'Adam Fourney\', \'Erkang Zhu\', \'Chi Wang\', \'Saleema Amershi\'], \'published\': \'2024-08-09\', \'abstract\': \'Multi-agent systems, where multiple agents (generative AI models + tools)\\ncollaborate, are emerging as an effective pattern for solving long-running,\\ncomplex tasks in numerous domains. However, specifying their parameters (such\\nas models, tools, and orchestration mechanisms etc,.) and debugging them\\nremains challenging for most developers. To address this challenge, we present\\nAUTOGEN STUDIO, a no-code developer tool for rapidly prototyping, debugging,\\nand evaluating multi-agent workflows built upon the AUTOGEN framework. AUTOGEN\\nSTUDIO offers a web interface and a Python API for representing LLM-enabled\\nagents using a declarative (JSON-based) specification. It provides an intuitive\\ndrag-and-drop UI for agent workflow specification, interactive evaluation and\\ndebugging of workflows, and a gallery of reusable agent components. We\\nhighlight four design principles for no-code multi-agent developer tools and\\ncontribute an open-source implementation at\\nhttps://github.com/microsoft/autogen/tree/main/samples/apps/autogen-studio\', \'pdf_url\': \'http://arxiv.org/pdf/2408.15247v1\'}, {\'title\': \'Improving Performance of Commercially Available AI Products in a Multi-Agent Configuration\', \'authors\': [\'Cory Hymel\', \'Sida Peng\', \'Kevin Xu\', \'Charath Ranganathan\'], \'published\': \'2024-10-29\', \'abstract\': \'In recent years, with the rapid advancement of large language models (LLMs),\\nmulti-agent systems have become increasingly more capable of practical\\napplication. At the same time, the software development industry has had a\\nnumber of new AI-powered tools developed that improve the software development\\nlifecycle (SDLC). Academically, much attention has been paid to the role of\\nmulti-agent systems to the SDLC. And, while single-agent systems have\\nfrequently been examined in real-world applications, we have seen comparatively\\nfew real-world examples of publicly available commercial tools working together\\nin a multi-agent system with measurable improvements. In this experiment we\\ntest context sharing between Crowdbotics PRD AI, a tool for generating software\\nrequirements using AI, and GitHub Copilot, an AI pair-programming tool. By\\nsharing business requirements from PRD AI, we improve the code suggestion\\ncapabilities of GitHub Copilot by 13.8% and developer task success rate by\\n24.5% -- demonstrating a real-world example of commercially-available AI\\nsystems working together with improved outcomes.\', \'pdf_url\': \'http://arxiv.org/pdf/2410.22129v1\'}, {\'title\': \'AutoML-Agent: A Multi-Agent LLM Framework for Full-Pipeline AutoML\', \'authors\': [\'Patara Trirat\', \'Wonyong Jeong\', \'Sung Ju Hwang\'], \'published\': \'2024-10-03\', \'abstract\': "Automated machine learning (AutoML) accelerates AI development by automating\\ntasks in the development pipeline, such as optimal model search and\\nhyperparameter tuning. Existing AutoML systems often require technical\\nexpertise to set up complex tools, which is in general time-consuming and\\nrequires a large amount of human effort. 
Therefore, recent works have started\\nexploiting large language models (LLM) to lessen such burden and increase the\\nusability of AutoML frameworks via a natural language interface, allowing\\nnon-expert users to build their data-driven solutions. These methods, however,\\nare usually designed only for a particular process in the AI development\\npipeline and do not efficiently use the inherent capacity of the LLMs. This\\npaper proposes AutoML-Agent, a novel multi-agent framework tailored for\\nfull-pipeline AutoML, i.e., from data retrieval to model deployment.\\nAutoML-Agent takes user\'s task descriptions, facilitates collaboration between\\nspecialized LLM agents, and delivers deployment-ready models. Unlike existing\\nwork, instead of devising a single plan, we introduce a retrieval-augmented\\nplanning strategy to enhance exploration to search for more optimal plans. We\\nalso decompose each plan into sub-tasks (e.g., data preprocessing and neural\\nnetwork design) each of which is solved by a specialized agent we build via\\nprompting executing in parallel, making the search process more efficient.\\nMoreover, we propose a multi-stage verification to verify executed results and\\nguide the code generation LLM in implementing successful solutions. Extensive\\nexperiments on seven downstream tasks using fourteen datasets show that\\nAutoML-Agent achieves a higher success rate in automating the full AutoML\\nprocess, yielding systems with good performance throughout the diverse domains.", \'pdf_url\': \'http://arxiv.org/pdf/2410.02958v1\'}, {\'title\': \'Enhancing Trust in LLM-Based AI Automation Agents: New Considerations and Future Challenges\', \'authors\': [\'Sivan Schwartz\', \'Avi Yaeli\', \'Segev Shlomov\'], \'published\': \'2023-08-10\', \'abstract\': \'Trust in AI agents has been extensively studied in the literature, resulting\\nin significant advancements in our understanding of this field. However, the\\nrapid advancements in Large Language Models (LLMs) and the emergence of\\nLLM-based AI agent frameworks pose new challenges and opportunities for further\\nresearch. In the field of process automation, a new generation of AI-based\\nagents has emerged, enabling the execution of complex tasks. At the same time,\\nthe process of building automation has become more accessible to business users\\nvia user-friendly no-code tools and training mechanisms. This paper explores\\nthese new challenges and opportunities, analyzes the main aspects of trust in\\nAI agents discussed in existing literature, and identifies specific\\nconsiderations and challenges relevant to this new generation of automation\\nagents. We also evaluate how nascent products in this category address these\\nconsiderations. Finally, we highlight several challenges that the research\\ncommunity should address in this evolving landscape.\', \'pdf_url\': \'http://arxiv.org/pdf/2308.05391v1\'}, {\'title\': \'AI2Apps: A Visual IDE for Building LLM-based AI Agent Applications\', \'authors\': [\'Xin Pang\', \'Zhucong Li\', \'Jiaxiang Chen\', \'Yuan Cheng\', \'Yinghui Xu\', \'Yuan Qi\'], \'published\': \'2024-04-07\', \'abstract\': \'We introduce AI2Apps, a Visual Integrated Development Environment (Visual\\nIDE) with full-cycle capabilities that accelerates developers to build\\ndeployable LLM-based AI agent Applications. 
This Visual IDE prioritizes both\\nthe Integrity of its development tools and the Visuality of its components,\\nensuring a smooth and efficient building experience.On one hand, AI2Apps\\nintegrates a comprehensive development toolkit ranging from a prototyping\\ncanvas and AI-assisted code editor to agent debugger, management system, and\\ndeployment tools all within a web-based graphical user interface. On the other\\nhand, AI2Apps visualizes reusable front-end and back-end code as intuitive\\ndrag-and-drop components. Furthermore, a plugin system named AI2Apps Extension\\n(AAE) is designed for Extensibility, showcasing how a new plugin with 20\\ncomponents enables web agent to mimic human-like browsing behavior. Our case\\nstudy demonstrates substantial efficiency improvements, with AI2Apps reducing\\ntoken consumption and API calls when debugging a specific sophisticated\\nmultimodal agent by approximately 90% and 80%, respectively. The AI2Apps,\\nincluding an online demo, open-source code, and a screencast video, is now\\npublicly accessible.\', \'pdf_url\': \'http://arxiv.org/pdf/2404.04902v1\'}]', call_id='call_ZdmwQGTO03X23GeRn6fwDN8q')]
---------- Arxiv_Search_Agent ----------
Tool calls:
arxiv_search({"query":"no code tools for building multi agent AI systems","max_results":5}) = [{'title': 'AutoGen Studio: A No-Code Developer Tool for Building and Debugging Multi-Agent Systems', 'authors': ['Victor Dibia', 'Jingya Chen', 'Gagan Bansal', 'Suff Syed', 'Adam Fourney', 'Erkang Zhu', 'Chi Wang', 'Saleema Amershi'], 'published': '2024-08-09', 'abstract': 'Multi-agent systems, where multiple agents (generative AI models + tools)\ncollaborate, are emerging as an effective pattern for solving long-running,\ncomplex tasks in numerous domains. However, specifying their parameters (such\nas models, tools, and orchestration mechanisms etc,.) and debugging them\nremains challenging for most developers. To address this challenge, we present\nAUTOGEN STUDIO, a no-code developer tool for rapidly prototyping, debugging,\nand evaluating multi-agent workflows built upon the AUTOGEN framework. AUTOGEN\nSTUDIO offers a web interface and a Python API for representing LLM-enabled\nagents using a declarative (JSON-based) specification. It provides an intuitive\ndrag-and-drop UI for agent workflow specification, interactive evaluation and\ndebugging of workflows, and a gallery of reusable agent components. We\nhighlight four design principles for no-code multi-agent developer tools and\ncontribute an open-source implementation at\nhttps://github.com/microsoft/autogen/tree/main/samples/apps/autogen-studio', 'pdf_url': 'http://arxiv.org/pdf/2408.15247v1'}, {'title': 'Improving Performance of Commercially Available AI Products in a Multi-Agent Configuration', 'authors': ['Cory Hymel', 'Sida Peng', 'Kevin Xu', 'Charath Ranganathan'], 'published': '2024-10-29', 'abstract': 'In recent years, with the rapid advancement of large language models (LLMs),\nmulti-agent systems have become increasingly more capable of practical\napplication. At the same time, the software development industry has had a\nnumber of new AI-powered tools developed that improve the software development\nlifecycle (SDLC). Academically, much attention has been paid to the role of\nmulti-agent systems to the SDLC. And, while single-agent systems have\nfrequently been examined in real-world applications, we have seen comparatively\nfew real-world examples of publicly available commercial tools working together\nin a multi-agent system with measurable improvements. In this experiment we\ntest context sharing between Crowdbotics PRD AI, a tool for generating software\nrequirements using AI, and GitHub Copilot, an AI pair-programming tool. By\nsharing business requirements from PRD AI, we improve the code suggestion\ncapabilities of GitHub Copilot by 13.8% and developer task success rate by\n24.5% -- demonstrating a real-world example of commercially-available AI\nsystems working together with improved outcomes.', 'pdf_url': 'http://arxiv.org/pdf/2410.22129v1'}, {'title': 'AutoML-Agent: A Multi-Agent LLM Framework for Full-Pipeline AutoML', 'authors': ['Patara Trirat', 'Wonyong Jeong', 'Sung Ju Hwang'], 'published': '2024-10-03', 'abstract': "Automated machine learning (AutoML) accelerates AI development by automating\ntasks in the development pipeline, such as optimal model search and\nhyperparameter tuning. Existing AutoML systems often require technical\nexpertise to set up complex tools, which is in general time-consuming and\nrequires a large amount of human effort. 
Therefore, recent works have started\nexploiting large language models (LLM) to lessen such burden and increase the\nusability of AutoML frameworks via a natural language interface, allowing\nnon-expert users to build their data-driven solutions. These methods, however,\nare usually designed only for a particular process in the AI development\npipeline and do not efficiently use the inherent capacity of the LLMs. This\npaper proposes AutoML-Agent, a novel multi-agent framework tailored for\nfull-pipeline AutoML, i.e., from data retrieval to model deployment.\nAutoML-Agent takes user's task descriptions, facilitates collaboration between\nspecialized LLM agents, and delivers deployment-ready models. Unlike existing\nwork, instead of devising a single plan, we introduce a retrieval-augmented\nplanning strategy to enhance exploration to search for more optimal plans. We\nalso decompose each plan into sub-tasks (e.g., data preprocessing and neural\nnetwork design) each of which is solved by a specialized agent we build via\nprompting executing in parallel, making the search process more efficient.\nMoreover, we propose a multi-stage verification to verify executed results and\nguide the code generation LLM in implementing successful solutions. Extensive\nexperiments on seven downstream tasks using fourteen datasets show that\nAutoML-Agent achieves a higher success rate in automating the full AutoML\nprocess, yielding systems with good performance throughout the diverse domains.", 'pdf_url': 'http://arxiv.org/pdf/2410.02958v1'}, {'title': 'Enhancing Trust in LLM-Based AI Automation Agents: New Considerations and Future Challenges', 'authors': ['Sivan Schwartz', 'Avi Yaeli', 'Segev Shlomov'], 'published': '2023-08-10', 'abstract': 'Trust in AI agents has been extensively studied in the literature, resulting\nin significant advancements in our understanding of this field. However, the\nrapid advancements in Large Language Models (LLMs) and the emergence of\nLLM-based AI agent frameworks pose new challenges and opportunities for further\nresearch. In the field of process automation, a new generation of AI-based\nagents has emerged, enabling the execution of complex tasks. At the same time,\nthe process of building automation has become more accessible to business users\nvia user-friendly no-code tools and training mechanisms. This paper explores\nthese new challenges and opportunities, analyzes the main aspects of trust in\nAI agents discussed in existing literature, and identifies specific\nconsiderations and challenges relevant to this new generation of automation\nagents. We also evaluate how nascent products in this category address these\nconsiderations. Finally, we highlight several challenges that the research\ncommunity should address in this evolving landscape.', 'pdf_url': 'http://arxiv.org/pdf/2308.05391v1'}, {'title': 'AI2Apps: A Visual IDE for Building LLM-based AI Agent Applications', 'authors': ['Xin Pang', 'Zhucong Li', 'Jiaxiang Chen', 'Yuan Cheng', 'Yinghui Xu', 'Yuan Qi'], 'published': '2024-04-07', 'abstract': 'We introduce AI2Apps, a Visual Integrated Development Environment (Visual\nIDE) with full-cycle capabilities that accelerates developers to build\ndeployable LLM-based AI agent Applications. 
This Visual IDE prioritizes both\nthe Integrity of its development tools and the Visuality of its components,\nensuring a smooth and efficient building experience.On one hand, AI2Apps\nintegrates a comprehensive development toolkit ranging from a prototyping\ncanvas and AI-assisted code editor to agent debugger, management system, and\ndeployment tools all within a web-based graphical user interface. On the other\nhand, AI2Apps visualizes reusable front-end and back-end code as intuitive\ndrag-and-drop components. Furthermore, a plugin system named AI2Apps Extension\n(AAE) is designed for Extensibility, showcasing how a new plugin with 20\ncomponents enables web agent to mimic human-like browsing behavior. Our case\nstudy demonstrates substantial efficiency improvements, with AI2Apps reducing\ntoken consumption and API calls when debugging a specific sophisticated\nmultimodal agent by approximately 90% and 80%, respectively. The AI2Apps,\nincluding an online demo, open-source code, and a screencast video, is now\npublicly accessible.', 'pdf_url': 'http://arxiv.org/pdf/2404.04902v1'}]
---------- Report_Agent ----------
## Literature Review on No-Code Tools for Building Multi-Agent AI Systems
### Introduction
The emergence of multi-agent systems (MAS) has transformed various domains by enabling collaboration among multiple agents—ranging from generative AI models to orchestrated tools—to solve complex, long-term tasks. However, the traditional development of these systems often requires substantial technical expertise, making it inaccessible for non-developers. The introduction of no-code platforms aims to shift this paradigm, allowing users without formal programming knowledge to design, debug, and deploy multi-agent systems. This review synthesizes current literature concerning no-code tools developed for building multi-agent AI systems, highlighting recent advancements and emerging trends.
### No-Code Development Tools
#### AutoGen Studio
One of the prominent no-code tools is **AutoGen Studio**, developed by Dibia et al. (2024). This tool provides a web interface and a declarative specification method utilizing JSON, enabling rapid prototyping, debugging, and evaluating multi-agent workflows. The drag-and-drop capabilities streamline the design process, making complex interactions between agents more manageable. The framework operates on four primary design principles that cater specifically to no-code development, contributing to an accessible pathway for users to harness multi-agent frameworks for various applications (Dibia et al., 2024).
#### AI2Apps Visual IDE
Another notable tool is **AI2Apps**, described by Pang et al. (2024). It serves as a Visual Integrated Development Environment that incorporates a comprehensive set of tools from prototyping to deployment. The platform's user-friendly interface allows for the visualization of code through drag-and-drop components, facilitating smoother integration of different agents. An extension system enhances the platform's capabilities, showcasing the potential for customization and scalability in agent application development. The reported efficiency improvements in token consumption and API calls indicate substantial benefits in user-centric design (Pang et al., 2024).
### Performance Enhancements in Multi-Agent Configurations
Hymel et al. (2024) examined the collaborative performance of commercially available AI tools, demonstrating a measurable improvement when integrating multiple agents in a shared configuration. Their experiments showcased how cooperation between tools like Crowdbotics PRD AI and GitHub Copilot significantly improved task success rates, illustrating the practical benefits of employing no-code tools in multi-agent environments. This synergy reflects the critical need for frameworks that inherently support such integrations, especially through no-code mechanisms, to enhance user experience and productivity (Hymel et al., 2024).
### Trust and Usability in AI Agents
The concept of trust in AI, particularly in LLM-based automation agents, has gained attention. Schwartz et al. (2023) addressed the challenges and considerations unique to this new generation of agents, highlighting how no-code platforms ease access and usability for non-technical users. The paper emphasizes the need for further research into the trust factors integral to effective multi-agent systems, advocating for a user-centric approach in the design and evaluation of these no-code tools (Schwartz et al., 2023).
### Full-Pipeline AutoML with Multi-Agent Systems
The **AutoML-Agent** framework proposed by Trirat et al. (2024) brings another layer of innovation to the no-code landscape. This framework enhances existing automated machine learning processes by using multiple specialized agents that collaboratively manage the full AI development pipeline from data retrieval to model deployment. The novelty lies in its retrieval-augmented planning strategy, which allows for efficient task decomposition and parallel execution, optimizing the overall development experience for non-experts (Trirat et al., 2024).
### Conclusion
The literature presents a growing array of no-code tools designed to democratize the development of multi-agent systems. Innovations such as AutoGen Studio, AI2Apps, and collaborative frameworks like AutoML-Agent highlight a trend towards user-centric, efficient design that encourages participation beyond technical boundaries. Future research should continue to explore aspects of trust, usability, and integration to further refine these tools and expand their applicability across various domains.
### References
- Dibia, V., Chen, J., Bansal, G., Syed, S., Fourney, A., Zhu, E., Wang, C., & Amershi, S. (2024). AutoGen Studio: A No-Code Developer Tool for Building and Debugging Multi-Agent Systems. *arXiv:2408.15247*.
- Hymel, C., Peng, S., Xu, K., & Ranganathan, C. (2024). Improving Performance of Commercially Available AI Products in a Multi-Agent Configuration. *arXiv:2410.22129*.
- Pang, X., Li, Z., Chen, J., Cheng, Y., Xu, Y., & Qi, Y. (2024). AI2Apps: A Visual IDE for Building LLM-based AI Agent Applications. *arXiv:2404.04902*.
- Schwartz, S., Yaeli, A., & Shlomov, S. (2023). Enhancing Trust in LLM-Based AI Automation Agents: New Considerations and Future Challenges. *arXiv:2308.05391*.
- Trirat, P., Jeong, W., & Hwang, S. J. (2024). AutoML-Agent: A Multi-Agent LLM Framework for Full-Pipeline AutoML. *arXiv:2410.02958*.
TERMINATE
[Prompt tokens: 2381, Completion tokens: 1090]
---------- Summary ----------
Number of messages: 8
Finish reason: Text 'TERMINATE' mentioned
Total prompt tokens: 3223
Total completion tokens: 1147
Duration: 17.06 seconds
TaskResult(messages=[TextMessage(source='user', models_usage=None, content='Write a literature review on no code tools for building multi agent ai systems', type='TextMessage'), ToolCallRequestEvent(source='Google_Search_Agent', models_usage=RequestUsage(prompt_tokens=123, completion_tokens=29), content=[FunctionCall(id='call_bNGwWFsfeTwDhtIpsI6GYISR', arguments='{"query":"no code tools for building multi agent AI systems literature review","num_results":3}', name='google_search')], type='ToolCallRequestEvent'), ToolCallExecutionEvent(source='Google_Search_Agent', models_usage=None, content=[FunctionExecutionResult(content='[{\'title\': \'Literature Review — AutoGen\', \'link\': \'https://microsoft.github.io/autogen/dev//user-guide/agentchat-user-guide/examples/literature-review.html\', \'snippet\': \'run( task="Write a literature review on no code tools for building multi agent ai systems", ) ... ### Conclusion No-code tools for building multi-agent AI systems\\xa0...\', \'body\': \'Literature Review — AutoGen Skip to main content Back to top Ctrl + K AutoGen 0.4 is a work in progress. Go here to find the 0.2 documentation. User Guide Packages API Reference Twitter GitHub PyPI User Guide Packages API Reference Twitter GitHub PyPI AgentChat Installation Quickstart Tutorial Models Messages Agents Teams Selector Group Chat Swarm Termination Custom Agents Managing State Examples Travel Planning Company Research Literature Review Core Quick Start Core Concepts Agent and\'}, {\'title\': \'Vertex AI Agent Builder | Google Cloud\', \'link\': \'https://cloud.google.com/products/agent-builder\', \'snippet\': \'Build and deploy enterprise ready generative AI experiences · Product highlights · Easily build no code conversational AI agents · Ground in Google search and/or\\xa0...\', \'body\': \'Vertex AI Agent Builder | Google Cloud Page Contents Vertex AI Agent Builder is making generative AI more reliable for the enterprise. Read the blog. Vertex AI Agent Builder Build and deploy enterprise ready generative AI experiences Create AI agents and applications using natural language or a code-first approach. Easily ground your agents or apps in enterprise data with a range of options. Vertex AI Agent Builder gathers all the surfaces and tools that developers need to build their AI agents\'}, {\'title\': \'AI tools I have found useful w/ research. What do you guys think ...\', \'link\': \'https://www.reddit.com/r/PhD/comments/14d6g09/ai_tools_i_have_found_useful_w_research_what_do/\', \'snippet\': "Jun 19, 2023 ... Need help deciding on the best ones, and to identify ones I\'ve missed: ASSISTANTS (chatbots, multi-purpose) Chat with Open Large Language Models.", \'body\': \'Reddit - Dive into anything Skip to main content Open menu Open navigation Go to Reddit Home r/PhD A chip A close button Get app Get the Reddit app Log In Log in to Reddit Expand user menu Open settings menu Log In / Sign Up Advertise on Reddit Shop Collectible Avatars Get the Reddit app Scan this QR code to download the app now Or check it out in the app stores Go to PhD r/PhD r/PhD A subreddit dedicated to PhDs. 
Members Online • [deleted] ADMIN MOD AI tools I have found useful w/ research.\'}]', call_id='call_bNGwWFsfeTwDhtIpsI6GYISR')], type='ToolCallExecutionEvent'), TextMessage(source='Google_Search_Agent', models_usage=None, content='Tool calls:\ngoogle_search({"query":"no code tools for building multi agent AI systems literature review","num_results":3}) = [{\'title\': \'Literature Review — AutoGen\', \'link\': \'https://microsoft.github.io/autogen/dev//user-guide/agentchat-user-guide/examples/literature-review.html\', \'snippet\': \'run( task="Write a literature review on no code tools for building multi agent ai systems", ) ... ### Conclusion No-code tools for building multi-agent AI systems\\xa0...\', \'body\': \'Literature Review — AutoGen Skip to main content Back to top Ctrl + K AutoGen 0.4 is a work in progress. Go here to find the 0.2 documentation. User Guide Packages API Reference Twitter GitHub PyPI User Guide Packages API Reference Twitter GitHub PyPI AgentChat Installation Quickstart Tutorial Models Messages Agents Teams Selector Group Chat Swarm Termination Custom Agents Managing State Examples Travel Planning Company Research Literature Review Core Quick Start Core Concepts Agent and\'}, {\'title\': \'Vertex AI Agent Builder | Google Cloud\', \'link\': \'https://cloud.google.com/products/agent-builder\', \'snippet\': \'Build and deploy enterprise ready generative AI experiences · Product highlights · Easily build no code conversational AI agents · Ground in Google search and/or\\xa0...\', \'body\': \'Vertex AI Agent Builder | Google Cloud Page Contents Vertex AI Agent Builder is making generative AI more reliable for the enterprise. Read the blog. Vertex AI Agent Builder Build and deploy enterprise ready generative AI experiences Create AI agents and applications using natural language or a code-first approach. Easily ground your agents or apps in enterprise data with a range of options. Vertex AI Agent Builder gathers all the surfaces and tools that developers need to build their AI agents\'}, {\'title\': \'AI tools I have found useful w/ research. What do you guys think ...\', \'link\': \'https://www.reddit.com/r/PhD/comments/14d6g09/ai_tools_i_have_found_useful_w_research_what_do/\', \'snippet\': "Jun 19, 2023 ... Need help deciding on the best ones, and to identify ones I\'ve missed: ASSISTANTS (chatbots, multi-purpose) Chat with Open Large Language Models.", \'body\': \'Reddit - Dive into anything Skip to main content Open menu Open navigation Go to Reddit Home r/PhD A chip A close button Get app Get the Reddit app Log In Log in to Reddit Expand user menu Open settings menu Log In / Sign Up Advertise on Reddit Shop Collectible Avatars Get the Reddit app Scan this QR code to download the app now Or check it out in the app stores Go to PhD r/PhD r/PhD A subreddit dedicated to PhDs. 
Members Online • [deleted] ADMIN MOD AI tools I have found useful w/ research.\'}]', type='TextMessage'), ToolCallRequestEvent(source='Arxiv_Search_Agent', models_usage=RequestUsage(prompt_tokens=719, completion_tokens=28), content=[FunctionCall(id='call_ZdmwQGTO03X23GeRn6fwDN8q', arguments='{"query":"no code tools for building multi agent AI systems","max_results":5}', name='arxiv_search')], type='ToolCallRequestEvent'), ToolCallExecutionEvent(source='Arxiv_Search_Agent', models_usage=None, content=[FunctionExecutionResult(content='[{\'title\': \'AutoGen Studio: A No-Code Developer Tool for Building and Debugging Multi-Agent Systems\', \'authors\': [\'Victor Dibia\', \'Jingya Chen\', \'Gagan Bansal\', \'Suff Syed\', \'Adam Fourney\', \'Erkang Zhu\', \'Chi Wang\', \'Saleema Amershi\'], \'published\': \'2024-08-09\', \'abstract\': \'Multi-agent systems, where multiple agents (generative AI models + tools)\\ncollaborate, are emerging as an effective pattern for solving long-running,\\ncomplex tasks in numerous domains. However, specifying their parameters (such\\nas models, tools, and orchestration mechanisms etc,.) and debugging them\\nremains challenging for most developers. To address this challenge, we present\\nAUTOGEN STUDIO, a no-code developer tool for rapidly prototyping, debugging,\\nand evaluating multi-agent workflows built upon the AUTOGEN framework. AUTOGEN\\nSTUDIO offers a web interface and a Python API for representing LLM-enabled\\nagents using a declarative (JSON-based) specification. It provides an intuitive\\ndrag-and-drop UI for agent workflow specification, interactive evaluation and\\ndebugging of workflows, and a gallery of reusable agent components. We\\nhighlight four design principles for no-code multi-agent developer tools and\\ncontribute an open-source implementation at\\nhttps://github.com/microsoft/autogen/tree/main/samples/apps/autogen-studio\', \'pdf_url\': \'http://arxiv.org/pdf/2408.15247v1\'}, {\'title\': \'Improving Performance of Commercially Available AI Products in a Multi-Agent Configuration\', \'authors\': [\'Cory Hymel\', \'Sida Peng\', \'Kevin Xu\', \'Charath Ranganathan\'], \'published\': \'2024-10-29\', \'abstract\': \'In recent years, with the rapid advancement of large language models (LLMs),\\nmulti-agent systems have become increasingly more capable of practical\\napplication. At the same time, the software development industry has had a\\nnumber of new AI-powered tools developed that improve the software development\\nlifecycle (SDLC). Academically, much attention has been paid to the role of\\nmulti-agent systems to the SDLC. And, while single-agent systems have\\nfrequently been examined in real-world applications, we have seen comparatively\\nfew real-world examples of publicly available commercial tools working together\\nin a multi-agent system with measurable improvements. In this experiment we\\ntest context sharing between Crowdbotics PRD AI, a tool for generating software\\nrequirements using AI, and GitHub Copilot, an AI pair-programming tool. 
By\\nsharing business requirements from PRD AI, we improve the code suggestion\\ncapabilities of GitHub Copilot by 13.8% and developer task success rate by\\n24.5% -- demonstrating a real-world example of commercially-available AI\\nsystems working together with improved outcomes.\', \'pdf_url\': \'http://arxiv.org/pdf/2410.22129v1\'}, {\'title\': \'AutoML-Agent: A Multi-Agent LLM Framework for Full-Pipeline AutoML\', \'authors\': [\'Patara Trirat\', \'Wonyong Jeong\', \'Sung Ju Hwang\'], \'published\': \'2024-10-03\', \'abstract\': "Automated machine learning (AutoML) accelerates AI development by automating\\ntasks in the development pipeline, such as optimal model search and\\nhyperparameter tuning. Existing AutoML systems often require technical\\nexpertise to set up complex tools, which is in general time-consuming and\\nrequires a large amount of human effort. Therefore, recent works have started\\nexploiting large language models (LLM) to lessen such burden and increase the\\nusability of AutoML frameworks via a natural language interface, allowing\\nnon-expert users to build their data-driven solutions. These methods, however,\\nare usually designed only for a particular process in the AI development\\npipeline and do not efficiently use the inherent capacity of the LLMs. This\\npaper proposes AutoML-Agent, a novel multi-agent framework tailored for\\nfull-pipeline AutoML, i.e., from data retrieval to model deployment.\\nAutoML-Agent takes user\'s task descriptions, facilitates collaboration between\\nspecialized LLM agents, and delivers deployment-ready models. Unlike existing\\nwork, instead of devising a single plan, we introduce a retrieval-augmented\\nplanning strategy to enhance exploration to search for more optimal plans. We\\nalso decompose each plan into sub-tasks (e.g., data preprocessing and neural\\nnetwork design) each of which is solved by a specialized agent we build via\\nprompting executing in parallel, making the search process more efficient.\\nMoreover, we propose a multi-stage verification to verify executed results and\\nguide the code generation LLM in implementing successful solutions. Extensive\\nexperiments on seven downstream tasks using fourteen datasets show that\\nAutoML-Agent achieves a higher success rate in automating the full AutoML\\nprocess, yielding systems with good performance throughout the diverse domains.", \'pdf_url\': \'http://arxiv.org/pdf/2410.02958v1\'}, {\'title\': \'Enhancing Trust in LLM-Based AI Automation Agents: New Considerations and Future Challenges\', \'authors\': [\'Sivan Schwartz\', \'Avi Yaeli\', \'Segev Shlomov\'], \'published\': \'2023-08-10\', \'abstract\': \'Trust in AI agents has been extensively studied in the literature, resulting\\nin significant advancements in our understanding of this field. However, the\\nrapid advancements in Large Language Models (LLMs) and the emergence of\\nLLM-based AI agent frameworks pose new challenges and opportunities for further\\nresearch. In the field of process automation, a new generation of AI-based\\nagents has emerged, enabling the execution of complex tasks. At the same time,\\nthe process of building automation has become more accessible to business users\\nvia user-friendly no-code tools and training mechanisms. This paper explores\\nthese new challenges and opportunities, analyzes the main aspects of trust in\\nAI agents discussed in existing literature, and identifies specific\\nconsiderations and challenges relevant to this new generation of automation\\nagents. 
We also evaluate how nascent products in this category address these\\nconsiderations. Finally, we highlight several challenges that the research\\ncommunity should address in this evolving landscape.\', \'pdf_url\': \'http://arxiv.org/pdf/2308.05391v1\'}, {\'title\': \'AI2Apps: A Visual IDE for Building LLM-based AI Agent Applications\', \'authors\': [\'Xin Pang\', \'Zhucong Li\', \'Jiaxiang Chen\', \'Yuan Cheng\', \'Yinghui Xu\', \'Yuan Qi\'], \'published\': \'2024-04-07\', \'abstract\': \'We introduce AI2Apps, a Visual Integrated Development Environment (Visual\\nIDE) with full-cycle capabilities that accelerates developers to build\\ndeployable LLM-based AI agent Applications. This Visual IDE prioritizes both\\nthe Integrity of its development tools and the Visuality of its components,\\nensuring a smooth and efficient building experience.On one hand, AI2Apps\\nintegrates a comprehensive development toolkit ranging from a prototyping\\ncanvas and AI-assisted code editor to agent debugger, management system, and\\ndeployment tools all within a web-based graphical user interface. On the other\\nhand, AI2Apps visualizes reusable front-end and back-end code as intuitive\\ndrag-and-drop components. Furthermore, a plugin system named AI2Apps Extension\\n(AAE) is designed for Extensibility, showcasing how a new plugin with 20\\ncomponents enables web agent to mimic human-like browsing behavior. Our case\\nstudy demonstrates substantial efficiency improvements, with AI2Apps reducing\\ntoken consumption and API calls when debugging a specific sophisticated\\nmultimodal agent by approximately 90% and 80%, respectively. The AI2Apps,\\nincluding an online demo, open-source code, and a screencast video, is now\\npublicly accessible.\', \'pdf_url\': \'http://arxiv.org/pdf/2404.04902v1\'}]', call_id='call_ZdmwQGTO03X23GeRn6fwDN8q')], type='ToolCallExecutionEvent'), TextMessage(source='Arxiv_Search_Agent', models_usage=None, content='Tool calls:\narxiv_search({"query":"no code tools for building multi agent AI systems","max_results":5}) = [{\'title\': \'AutoGen Studio: A No-Code Developer Tool for Building and Debugging Multi-Agent Systems\', \'authors\': [\'Victor Dibia\', \'Jingya Chen\', \'Gagan Bansal\', \'Suff Syed\', \'Adam Fourney\', \'Erkang Zhu\', \'Chi Wang\', \'Saleema Amershi\'], \'published\': \'2024-08-09\', \'abstract\': \'Multi-agent systems, where multiple agents (generative AI models + tools)\\ncollaborate, are emerging as an effective pattern for solving long-running,\\ncomplex tasks in numerous domains. However, specifying their parameters (such\\nas models, tools, and orchestration mechanisms etc,.) and debugging them\\nremains challenging for most developers. To address this challenge, we present\\nAUTOGEN STUDIO, a no-code developer tool for rapidly prototyping, debugging,\\nand evaluating multi-agent workflows built upon the AUTOGEN framework. AUTOGEN\\nSTUDIO offers a web interface and a Python API for representing LLM-enabled\\nagents using a declarative (JSON-based) specification. It provides an intuitive\\ndrag-and-drop UI for agent workflow specification, interactive evaluation and\\ndebugging of workflows, and a gallery of reusable agent components. 
We\\nhighlight four design principles for no-code multi-agent developer tools and\\ncontribute an open-source implementation at\\nhttps://github.com/microsoft/autogen/tree/main/samples/apps/autogen-studio\', \'pdf_url\': \'http://arxiv.org/pdf/2408.15247v1\'}, {\'title\': \'Improving Performance of Commercially Available AI Products in a Multi-Agent Configuration\', \'authors\': [\'Cory Hymel\', \'Sida Peng\', \'Kevin Xu\', \'Charath Ranganathan\'], \'published\': \'2024-10-29\', \'abstract\': \'In recent years, with the rapid advancement of large language models (LLMs),\\nmulti-agent systems have become increasingly more capable of practical\\napplication. At the same time, the software development industry has had a\\nnumber of new AI-powered tools developed that improve the software development\\nlifecycle (SDLC). Academically, much attention has been paid to the role of\\nmulti-agent systems to the SDLC. And, while single-agent systems have\\nfrequently been examined in real-world applications, we have seen comparatively\\nfew real-world examples of publicly available commercial tools working together\\nin a multi-agent system with measurable improvements. In this experiment we\\ntest context sharing between Crowdbotics PRD AI, a tool for generating software\\nrequirements using AI, and GitHub Copilot, an AI pair-programming tool. By\\nsharing business requirements from PRD AI, we improve the code suggestion\\ncapabilities of GitHub Copilot by 13.8% and developer task success rate by\\n24.5% -- demonstrating a real-world example of commercially-available AI\\nsystems working together with improved outcomes.\', \'pdf_url\': \'http://arxiv.org/pdf/2410.22129v1\'}, {\'title\': \'AutoML-Agent: A Multi-Agent LLM Framework for Full-Pipeline AutoML\', \'authors\': [\'Patara Trirat\', \'Wonyong Jeong\', \'Sung Ju Hwang\'], \'published\': \'2024-10-03\', \'abstract\': "Automated machine learning (AutoML) accelerates AI development by automating\\ntasks in the development pipeline, such as optimal model search and\\nhyperparameter tuning. Existing AutoML systems often require technical\\nexpertise to set up complex tools, which is in general time-consuming and\\nrequires a large amount of human effort. Therefore, recent works have started\\nexploiting large language models (LLM) to lessen such burden and increase the\\nusability of AutoML frameworks via a natural language interface, allowing\\nnon-expert users to build their data-driven solutions. These methods, however,\\nare usually designed only for a particular process in the AI development\\npipeline and do not efficiently use the inherent capacity of the LLMs. This\\npaper proposes AutoML-Agent, a novel multi-agent framework tailored for\\nfull-pipeline AutoML, i.e., from data retrieval to model deployment.\\nAutoML-Agent takes user\'s task descriptions, facilitates collaboration between\\nspecialized LLM agents, and delivers deployment-ready models. Unlike existing\\nwork, instead of devising a single plan, we introduce a retrieval-augmented\\nplanning strategy to enhance exploration to search for more optimal plans. We\\nalso decompose each plan into sub-tasks (e.g., data preprocessing and neural\\nnetwork design) each of which is solved by a specialized agent we build via\\nprompting executing in parallel, making the search process more efficient.\\nMoreover, we propose a multi-stage verification to verify executed results and\\nguide the code generation LLM in implementing successful solutions. 
Extensive\\nexperiments on seven downstream tasks using fourteen datasets show that\\nAutoML-Agent achieves a higher success rate in automating the full AutoML\\nprocess, yielding systems with good performance throughout the diverse domains.", \'pdf_url\': \'http://arxiv.org/pdf/2410.02958v1\'}, {\'title\': \'Enhancing Trust in LLM-Based AI Automation Agents: New Considerations and Future Challenges\', \'authors\': [\'Sivan Schwartz\', \'Avi Yaeli\', \'Segev Shlomov\'], \'published\': \'2023-08-10\', \'abstract\': \'Trust in AI agents has been extensively studied in the literature, resulting\\nin significant advancements in our understanding of this field. However, the\\nrapid advancements in Large Language Models (LLMs) and the emergence of\\nLLM-based AI agent frameworks pose new challenges and opportunities for further\\nresearch. In the field of process automation, a new generation of AI-based\\nagents has emerged, enabling the execution of complex tasks. At the same time,\\nthe process of building automation has become more accessible to business users\\nvia user-friendly no-code tools and training mechanisms. This paper explores\\nthese new challenges and opportunities, analyzes the main aspects of trust in\\nAI agents discussed in existing literature, and identifies specific\\nconsiderations and challenges relevant to this new generation of automation\\nagents. We also evaluate how nascent products in this category address these\\nconsiderations. Finally, we highlight several challenges that the research\\ncommunity should address in this evolving landscape.\', \'pdf_url\': \'http://arxiv.org/pdf/2308.05391v1\'}, {\'title\': \'AI2Apps: A Visual IDE for Building LLM-based AI Agent Applications\', \'authors\': [\'Xin Pang\', \'Zhucong Li\', \'Jiaxiang Chen\', \'Yuan Cheng\', \'Yinghui Xu\', \'Yuan Qi\'], \'published\': \'2024-04-07\', \'abstract\': \'We introduce AI2Apps, a Visual Integrated Development Environment (Visual\\nIDE) with full-cycle capabilities that accelerates developers to build\\ndeployable LLM-based AI agent Applications. This Visual IDE prioritizes both\\nthe Integrity of its development tools and the Visuality of its components,\\nensuring a smooth and efficient building experience.On one hand, AI2Apps\\nintegrates a comprehensive development toolkit ranging from a prototyping\\ncanvas and AI-assisted code editor to agent debugger, management system, and\\ndeployment tools all within a web-based graphical user interface. On the other\\nhand, AI2Apps visualizes reusable front-end and back-end code as intuitive\\ndrag-and-drop components. Furthermore, a plugin system named AI2Apps Extension\\n(AAE) is designed for Extensibility, showcasing how a new plugin with 20\\ncomponents enables web agent to mimic human-like browsing behavior. Our case\\nstudy demonstrates substantial efficiency improvements, with AI2Apps reducing\\ntoken consumption and API calls when debugging a specific sophisticated\\nmultimodal agent by approximately 90% and 80%, respectively. 
The AI2Apps,\\nincluding an online demo, open-source code, and a screencast video, is now\\npublicly accessible.\', \'pdf_url\': \'http://arxiv.org/pdf/2404.04902v1\'}]', type='TextMessage'), TextMessage(source='Report_Agent', models_usage=RequestUsage(prompt_tokens=2381, completion_tokens=1090), content="## Literature Review on No-Code Tools for Building Multi-Agent AI Systems\n\n### Introduction\n\nThe emergence of multi-agent systems (MAS) has transformed various domains by enabling collaboration among multiple agents—ranging from generative AI models to orchestrated tools—to solve complex, long-term tasks. However, the traditional development of these systems often requires substantial technical expertise, making it inaccessible for non-developers. The introduction of no-code platforms aims to shift this paradigm, allowing users without formal programming knowledge to design, debug, and deploy multi-agent systems. This review synthesizes current literature concerning no-code tools developed for building multi-agent AI systems, highlighting recent advancements and emerging trends.\n\n### No-Code Development Tools\n\n#### AutoGen Studio\n\nOne of the prominent no-code tools is **AutoGen Studio**, developed by Dibia et al. (2024). This tool provides a web interface and a declarative specification method utilizing JSON, enabling rapid prototyping, debugging, and evaluating multi-agent workflows. The drag-and-drop capabilities streamline the design process, making complex interactions between agents more manageable. The framework operates on four primary design principles that cater specifically to no-code development, contributing to an accessible pathway for users to harness multi-agent frameworks for various applications (Dibia et al., 2024).\n\n#### AI2Apps Visual IDE\n\nAnother notable tool is **AI2Apps**, described by Pang et al. (2024). It serves as a Visual Integrated Development Environment that incorporates a comprehensive set of tools from prototyping to deployment. The platform's user-friendly interface allows for the visualization of code through drag-and-drop components, facilitating smoother integration of different agents. An extension system enhances the platform's capabilities, showcasing the potential for customization and scalability in agent application development. The reported efficiency improvements in token consumption and API calls indicate substantial benefits in user-centric design (Pang et al., 2024).\n\n### Performance Enhancements in Multi-Agent Configurations\n\nHymel et al. (2024) examined the collaborative performance of commercially available AI tools, demonstrating a measurable improvement when integrating multiple agents in a shared configuration. Their experiments showcased how cooperation between tools like Crowdbotics PRD AI and GitHub Copilot significantly improved task success rates, illustrating the practical benefits of employing no-code tools in multi-agent environments. This synergy reflects the critical need for frameworks that inherently support such integrations, especially through no-code mechanisms, to enhance user experience and productivity (Hymel et al., 2024).\n\n### Trust and Usability in AI Agents\n\nThe concept of trust in AI, particularly in LLM-based automation agents, has gained attention. Schwartz et al. (2023) addressed the challenges and considerations unique to this new generation of agents, highlighting how no-code platforms ease access and usability for non-technical users. 
The paper emphasizes the need for further research into the trust factors integral to effective multi-agent systems, advocating for a user-centric approach in the design and evaluation of these no-code tools (Schwartz et al., 2023).\n\n### Full-Pipeline AutoML with Multi-Agent Systems\n\nThe **AutoML-Agent** framework proposed by Trirat et al. (2024) brings another layer of innovation to the no-code landscape. This framework enhances existing automated machine learning processes by using multiple specialized agents that collaboratively manage the full AI development pipeline from data retrieval to model deployment. The novelty lies in its retrieval-augmented planning strategy, which allows for efficient task decomposition and parallel execution, optimizing the overall development experience for non-experts (Trirat et al., 2024).\n\n### Conclusion\n\nThe literature presents a growing array of no-code tools designed to democratize the development of multi-agent systems. Innovations such as AutoGen Studio, AI2Apps, and collaborative frameworks like AutoML-Agent highlight a trend towards user-centric, efficient design that encourages participation beyond technical boundaries. Future research should continue to explore aspects of trust, usability, and integration to further refine these tools and expand their applicability across various domains.\n\n### References\n\n- Dibia, V., Chen, J., Bansal, G., Syed, S., Fourney, A., Zhu, E., Wang, C., & Amershi, S. (2024). AutoGen Studio: A No-Code Developer Tool for Building and Debugging Multi-Agent Systems. *arXiv:2408.15247*.\n- Hymel, C., Peng, S., Xu, K., & Ranganathan, C. (2024). Improving Performance of Commercially Available AI Products in a Multi-Agent Configuration. *arXiv:2410.22129*.\n- Pang, X., Li, Z., Chen, J., Cheng, Y., Xu, Y., & Qi, Y. (2024). AI2Apps: A Visual IDE for Building LLM-based AI Agent Applications. *arXiv:2404.04902*.\n- Schwartz, S., Yaeli, A., & Shlomov, S. (2023). Enhancing Trust in LLM-Based AI Automation Agents: New Considerations and Future Challenges. *arXiv:2308.05391*.\n- Trirat, P., Jeong, W., & Hwang, S. J. (2024). AutoML-Agent: A Multi-Agent LLM Framework for Full-Pipeline AutoML. *arXiv:2410.02958*.\n\nTERMINATE", type='TextMessage')], stop_reason="Text 'TERMINATE' mentioned")
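The top-level await calls above work as-is in a notebook. If you run the example as a standalone Python script instead, wrap them in an async entry point, for example (a minimal sketch):
import asyncio

async def main() -> None:
    # Stream the team run to the console, then release the model client.
    await Console(
        team.run_stream(
            task="Write a literature review on no code tools for building multi agent ai systems",
        )
    )
    await model_client.close()

asyncio.run(main())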