AI Assistants: Picking the Right Copilot

AI tools that aid software development abound, but each has strengths and weaknesses to consider, especially when planning apps and data models in an IDE. Here's an overview of the available solutions.

Pierre DeBois

August 24, 2024


The excitement generated by AI will certainly continue through the rest of 2024. This year is also unveiling real-life use cases in which AI serves as an agent. This means professionals of every ilk will be deciding what kinds of AI assistants they should use to help with their tasks.

AI assistants are at the center of those use cases, especially in app and software development. Picking the right one will not be easy, as solution providers have different takes on what an ideal AI assistant should do.

How AI Became the Copilot

The most prominent benefit AI offers is its ability to consolidate information and present it in a relatable way. This is the essence of what a good tech agent does. For example, Google added SGE (Search Generative Experience), an AI enhancement to its venerable search engine that displays an AI-generated overview at the top of a search engine results page (SERP). Search engines often produce an overwhelming list of choices, and people tend to click within the first page of results, assuming the top results are the best ones to provide what they need. AI assistants such as Google's SGE and Bard, in contrast, return a more conversational result, enabling what feels like a more natural interaction than reviewing links and metadata.

The first major AI assistants to appear on the market were the plug-ins for ChatGPT and Bard. These were just extensions of the existing solutions, with Bard featuring extensions primarily for Google-related services.


More intriguing were the companies that developed plug-in extensions for ChatGPT. These extended the discoverability and use case examples for OpenAI's tool.

The plug-in that has gotten the most attention among data professionals is Advanced Data Analysis (ADA), the OpenAI-developed extension for ChatGPT Plus users. Originally called Code Interpreter, ADA provides data cleaning and statistical calculations that make data exploration easier without requiring programming syntax.
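
To make that concrete, here is a minimal sketch of the kind of pandas code ADA typically writes and runs in its sandbox when a ChatGPT Plus user uploads a CSV and asks for cleanup and summary statistics. The file name and column names here are hypothetical, and the actual code ADA generates depends on the prompt and the data.

```python
import pandas as pd

# Load the uploaded file (hypothetical name); ADA runs similar code behind the scenes.
df = pd.read_csv("sales_data.csv")

# Basic cleaning: drop exact duplicates and rows missing the key revenue field.
df = df.drop_duplicates()
df = df.dropna(subset=["revenue"])

# Normalize a text column and parse dates so grouping works correctly.
df["region"] = df["region"].str.strip().str.title()
df["order_date"] = pd.to_datetime(df["order_date"], errors="coerce")

# Summary statistics the assistant would then report back in plain language.
summary = df.groupby("region")["revenue"].agg(["count", "mean", "median", "std"])
print(summary)
```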

Interest in plug-ins and extensions set the marketplace environment for AI assistant adoption. AI assistants differ from plug-ins in the sources of data that a generative AI platform uses to generate a response to a prompt. GenAI platforms generate responses that resemble the elements contained in their training corpus. So, while a plug-in uses the same corpus that ChatGPT relies upon, an AI assistant is usually trained on a different corpus better suited to a specific task.

A good AI assistant example is GitHub Copilot. Built on OpenAI Codex, a model descended from GPT-3, GitHub Copilot analyzes programming syntax and script comments to create real-time suggestions for the next line of code in a given script. The result is a form of pair-programming behavior that makes programming and debugging faster.
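
That pair-programming behavior is easiest to see with a comment-driven completion. In the sketch below, the comment and the function signature are what a developer types; the body is the sort of inline suggestion an assistant like Copilot surfaces. This is an illustrative example, not actual Copilot output.

```python
# Return the n most common words in a text file, ignoring case.
def top_words(path: str, n: int = 10) -> list[tuple[str, int]]:
    # --- From here down is the kind of completion the assistant suggests ---
    from collections import Counter
    import re

    with open(path, encoding="utf-8") as f:
        words = re.findall(r"[a-z']+", f.read().lower())
    return Counter(words).most_common(n)
```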


Other assistants have been launched in response to the speed of marketplace changes and to the GitHub and OpenAI launches. Jupyter, for example, launched Jupyter AI, an assistant for Jupyter Notebook and JupyterLab. Like GitHub Copilot, Jupyter AI uses genAI as an assistant within an interface, in this case the project notebook.

Jupyter Notebooks have long been a popular means to share project code and supporting media, so Jupyter AI places assistant and chat features within the Jupyter toolchain.
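
As a rough sketch, assuming the jupyter-ai package is installed and an OpenAI provider is configured with an API key, a notebook can invoke the assistant through IPython magics. The "chatgpt" alias below assumes an OpenAI chat model is available; the provider and model names depend on your setup.

```python
# In one notebook cell: load the Jupyter AI magics extension.
%load_ext jupyter_ai_magics

# In a separate cell: send a prompt to a configured provider/model.
%%ai chatgpt
Explain what the groupby cell above is doing, and suggest a plot that summarizes it.
```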

Another GitHub Copilot competitor is Tabnine, a subscription AI assistant extension that also performs code completion and syntax recommendations. Like Jupyter AI, Tabnine includes a chat agent within the IDE, allowing developers to ask questions and receive suggestions tailored to the project at hand. Besides its availability in Visual Studio Code, Tabnine is available for other IDEs developers use, such as Sublime Text and PyCharm.

Amazon SageMaker was originally introduced as a service for building, training, and deploying machine learning models. Its peripheral services offer developers and programmers a comprehensive environment for data exploration and machine learning project support. Amazon has since added its own genAI copilot to unify recommendations based on activity across those services and make it easier for developers to create code.
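
For context, the workflow SageMaker wraps looks roughly like the following sketch with the SageMaker Python SDK; the training script, S3 path, and instance types are placeholder assumptions, and a genAI copilot's job is to help write and wire up code like this rather than replace it.

```python
import sagemaker
from sagemaker.sklearn.estimator import SKLearn

# Session and IAM role come from the SageMaker environment (e.g., a Studio notebook).
session = sagemaker.Session()
role = sagemaker.get_execution_role()

# Train a scikit-learn model defined in a user-supplied script (hypothetical file).
estimator = SKLearn(
    entry_point="train.py",
    framework_version="1.2-1",
    instance_type="ml.m5.large",
    role=role,
    sagemaker_session=session,
)
estimator.fit({"train": "s3://example-bucket/training-data/"})

# Deploy the trained model behind a real-time endpoint.
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")
```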

Not all assistants are meant for tech professionals; others with a focus on consumer benefits are emerging. For example, Amazon has introduced an AI tool for answering shopper questions, consolidating information from the vast number of product pages on the Amazon site and saving customers time in making a purchase decision. A similar AI agent comes from Costco, which announced it is installing AI scanners in its retail stores to review customer carts at checkout. The scanners eliminate the need for someone to physically check receipts against cart items at the store exit and reduce bottlenecks as customers leave the store.

What Makes a Good AI Assistant?

So, what should technical professionals look for in an AI assistant? The best assistant operates as an agent that understands the context it can assume from its known environment.

IDE assistants such as GitHub Copilot know that they are responding with programming projects in mind. GitHub Copilot examines script comments as well as syntax in a given script before crafting a suggestion, weighing both against its training data, which combines GPT training with the codebase of GitHub's public repositories. Because it was trained on those public repositories, Copilot has a slightly different "perspective" on syntax than ChatGPT's ADA. Thus, the choice of corpus for an AI model can influence the answers an AI assistant yields to users.

A good AI assistant should also offer a responsive chat feature that reflects its understanding of its environment. Jupyter AI, Tabnine, and Copilot all offer a native chat UI. The chat experience influences how well a professional feels the AI assistant is working: how well it interprets prompts and how accurate its suggestions are both start with the conversational experience, so technical professionals should note their experiences to see which assistant works best for their projects.

Professionals should also consider how frequently the AI assistant will be applied to their work. Frequency can indicate the degree of value being created: more frequent use gives an AI assistant the opportunity to learn user preferences and past account history, which feeds into its recommendations. The result is better productivity with AI, as users quickly learn where to best explore and experiment when crafting applications.

Considering usage frequency can also reveal the cost of the technology against the value received. While many solutions have a nominal subscription fee, some have increased prices significantly when introducing an AI feature, and that cost filters down to the user. Users should note whether productivity is actually changing; otherwise, the cost of an AI assistant might prove an unnecessary expense.

The Rise of BYO-AI

You will likely hear about these tools alongside other AI assistants as part of an emerging tech trend among professionals: bring your own artificial intelligence (BYO-AI). The trend describes people learning to integrate their personal AI assistants, be it a self-crafted tool or a purchased service, into their workflow.

A number of marketplace introductions will accelerate the spread of these agents. The assistants designed for IDEs are a clear example, but developers are usually on the forefront of technology adoption. Thus, we are seeing these agents expand to other industries as other professionals learn to adopt AI in their workflow.

One clear influence is OpenAI's launch of custom GPTs and the GPT Store. Custom GPTs are agents that users can create for private or public use. The GPT builder lets users define how ChatGPT operates against a set of instructions and assumptions, forming an agent that can handle personal requests (private) or deliver a product built around more elaborate tasks (public). Public custom GPTs can be distributed, and potentially monetized, through the GPT Store.

Through solutions like custom GPTs, business professionals will develop assistants that complement the workplace apps and software they use daily. The result is AI agents tailored to personal needs. People will bring these agents into their workflow, creating new use cases for AI.

In the past year, the AI buzz dominated business and tech news. Going forward, that buzz will reveal how AI assistants bring enriched work solutions to life, be it on processes for customers or for software developed by a team.

About the Author

Pierre DeBois

Pierre DeBois is the founder of Zimana, a small business analytics consultancy that reviews data from Web analytics and social media dashboard solutions, then provides recommendations and Web development action that improves marketing strategy and business profitability. He has conducted analysis for various small businesses and has also provided his business and engineering acumen at various corporations such as Ford Motor Co.
