How Running Your Code on AI Can Boost Developer Productivity
Here's how you can craft good technical prompts that support your data analysis or programming.
November 9, 2023
For years, integrated development environments (better known as IDEs) have been the predominant choice of programmers, app developers, and data engineers. From RStudio to the venerable Visual Studio Code, they remain the go-to workhorses for delivering apps, websites, and other software in business.
Now new variations of IDEs have arrived, complete with AI-based features meant to assist the coder. Just as AI has stoked excitement and fear in the general public, these new features have stoked debate among developers. They introduce new ways to assist programmers and, with them, a need to craft good technical prompts that save time and produce useful results.
What ChatGPT and Bard Features Bring to the Developer's Table
ChatGPT Code Interpreter adds a new twist: users can supply data through uploaded files, a feature unavailable in the base ChatGPT interface. To use Code Interpreter, users click the circled plus icon in the prompt window. The plug-in supports a broad range of file types and can read a file up to 100 megabytes in size, interpret its format, and then provide an answer based on that format.
Meanwhile, Google has steadily rolled out Bard features that support specific coding needs. Since April, Google has enhanced how Bard detects computational prompts and runs code, improving its handling of prompts involving mathematics, coding questions, and string manipulation. Bard can handle images as well as text and code; it can export tables to Google Sheets, and it can export Python code to Google Colab and Replit for testing. Currently, 20 programming languages are supported.
Bard's biggest draw over ChatGPT is that its programming features are available without paying for a subscription plug-in.
The new features address a major pain point in data analysis: analysts can explore datasets without being bogged down by syntax. Data is always part of a prompt's input. When large language models (LLMs) first launched, users were confined to text for finding information or creating imagery. The new Bard features permit richer context in a prompt, leading to high-quality inputs that properly augment what is being described to the LLM.
Good Prompts Give Hope to Broader Programming Data Access
A good technical prompt includes content and input "data" that describe the parameters for the underlying AI model. "Data" does not have to be metrics or numbers; it can be other specifications for the code or a description of the kind of data needed. The key quality of a good prompt is that it describes the desired output at the right level of programming detail, spelling out guidelines and parameters. LLMs can solve a wide variety of programmatic tasks using a single model.
Let's say I need a table of products sold in a department store, populated with synthetic data so I can see what the table might look like. In a programming language, I would have to write syntax telling the computer to generate a range of random metrics. In Bard, I would instead write the following prompt describing the kind of data needed:
[Image: Bard-numbers]
Bard returns a table and the underlying Python code.
[Image: Bard-numbers-2]
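The exact code Bard produces varies between runs, but it typically resembles a sketch like the following. The product names, column names, and value ranges here are my own illustrative assumptions, not Bard's verbatim output:

```python
import random

import pandas as pd

random.seed(42)  # make the synthetic data reproducible

products = ["Shirts", "Shoes", "Handbags", "Cookware", "Toys"]

# Build one row per product with random but plausible sales metrics.
rows = [
    {
        "product": name,
        "units_sold": random.randint(50, 500),
        "unit_price": round(random.uniform(5.0, 120.0), 2),
    }
    for name in products
]

df = pd.DataFrame(rows)
df["revenue"] = (df["units_sold"] * df["unit_price"]).round(2)

print(df)
```

Because the data is synthetic, the specific numbers do not matter; what matters is that the table's shape and columns match the prompt's description.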
The prompt saved the time of generating synthetic data and typing out the table in a programming language. If I need the same code in a different language, I simply ask for it in the desired language as a follow-up query.
[Image: Bard-R-version]
LLMs enable programming to be described in natural language rather than starting from programming-specific syntax. This holds true for many languages programmers and data scientists use. The VLOOKUP, HLOOKUP, INDEX, and MATCH functions in Excel, for example, are similar to several functions in R's base and add-on libraries. Yet in both cases, users need some syntax familiarity to form useful calculations or accomplish tasks. Relying on AI prompts to request the output rather than the syntax will broaden access to analysis among professionals whose familiarity with programming languages is too limited to produce the desired output themselves.
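To illustrate the lookup parallel, the role VLOOKUP plays in Excel roughly corresponds to a `merge` in Python's pandas library. The SKUs and prices below are hypothetical, chosen only to show the mechanic:

```python
import pandas as pd

# Lookup table: the equivalent of a VLOOKUP reference range in Excel.
prices = pd.DataFrame({
    "sku": ["A100", "B200", "C300"],
    "unit_price": [19.99, 4.50, 120.00],
})

# Orders that need a price looked up by SKU.
orders = pd.DataFrame({
    "sku": ["B200", "A100", "B200"],
    "quantity": [3, 1, 2],
})

# A left merge plays the role of VLOOKUP: match on "sku", pull "unit_price".
merged = orders.merge(prices, on="sku", how="left")
merged["total"] = merged["quantity"] * merged["unit_price"]

print(merged)
```

Knowing that `merge(..., how="left")` is the right call is exactly the kind of syntax familiarity a well-written prompt can substitute for.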
With Language Freedom in AI, Programming Limitations Remain
Care must be taken not to include proprietary code or data in a prompt, especially now that file uploads are possible. Doing so can constitute an IP or data privacy breach, since many agreements are crafted on the premise that explicit permission is required for usage or release of data.
Also, do not accept AI-generated code without vetting it. Because LLMs are not deterministic, they make statistical associations about what constitutes a response to a prompt, often wrapped in verbose explanations. The phrasing and methodology of the results change from request to request, so precision in how the syntax is approached can be lost.
For example, when I asked Bard to list the datasets associated with the ggplot2 library in R programming, it replied that it did not recognize the library, giving the standard "I can't assist" answer.
[Image: Bard-ggplot2-fail]
When I asked again, I received the right listing.
[Image: Bard-second-try]
Despite recent advancements, large language models may return a distorted answer among a list of items or something entirely wrong.
This is why evaluating prompts with respect to syntax remains essential. Prompt usage is ultimately a form of heuristics: steps that lead to a solution through trial and error or loosely defined rules. LLMs were developed with text prediction in mind, meaning the models look statistically at which words form the best response to a prompt's directional language, seeking guidelines for what can and cannot appear in the output. This means the user must provide salient details, such as the factors in a dataset or the variable assumptions for crafting a regression.
Other Programming Tools With AI
Other plug-ins have been released since the arrival of Bard and ChatGPT Code Interpreter. These add new capabilities and options for crafting prompts that aid application and software development.
GitHub Copilot, launched in October 2021, preceded much of the AI frenzy (though developers debated how useful it was). Copilot was designed for pair-programming: it generates code from a developer's natural language query instead of language syntax. Copilot offers autocomplete-style suggestions that appear as you code and can translate code between programming languages. GitHub, OpenAI, and Microsoft jointly developed the generative AI model behind it.
Stack Overflow's OverflowAI has made a big splash among developers since its introduction in July. OverflowAI is an LLM trained on a massive dataset of coding questions and answers from Stack Overflow. The tool can generate code, answer questions, and provide other coding-related assistance. The service is free, versus Copilot's $10-per-month cost.
The most intriguing aspect of OverflowAI is its developer, Stack Overflow. Fearing inaccurate user-provided responses, the platform announced a ban on AI-generated answers on its site last year. But LLMs quickly began competing with Stack Overflow as an information resource. OverflowAI remains under development, but it has the potential to be a valuable tool because of Stack Overflow's established reputation among developers as a coding resource.
Another AI-based IDE, Google's Project IDX, was released in August. Project IDX is a browser-based development experience built on Google Cloud, and it uses Codey, a variation of the PaLM 2 LLM. IDX offers AI-powered development assistance and seamless integration with other platforms such as Firebase.
All these AI tools offer the chance to build valuable skills, such as completing programming tasks more quickly and efficiently. However, no tool is a suitable replacement for programming insight and expertise. In this nascent age of AI, everyone must consider how their prompts can help them build better skills and insights.