ChatGPT Could Make Democracy Even More Messy

There is plenty of speculation on how ChatGPT may revolutionize education, software, and journalism, but less about how it will affect the machinery of government.

Bloomberg News

December 6, 2022


(Bloomberg Opinion/Tyler Cowen) -- ChatGPT is an internet sensation, with its ability to provide intelligent and coherent answers to a wide variety of queries. There is plenty of speculation on how it may revolutionize education, software and journalism, but less about how it will affect the machinery of government. The effects are likely to be far-ranging.

Consider the regulatory process. In the US, there is typically a comment period before many new regulations take effect. To date, it has been presumed that human beings are making the comments. Yet by mobilizing ChatGPT, interested parties could flood the system. There is no law against using software to help produce public comments, or legal documents for that matter, and if need be a human could always make some modest changes.
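To see how low the barrier is, here is a minimal sketch of what drafting such a comment looks like with the OpenAI Python client. The model name, docket placeholder and prompt are assumptions made for this illustration, not anything specified in the column, and the sketch stops at producing a single draft.

```python
# Illustrative sketch only: drafting one public comment with the OpenAI
# Python client (v1.x). Model name and prompt are assumptions for this
# example; wrapping the call in a loop is all "flooding" would require.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Write a 150-word public comment opposing the proposed rule on "
    "[docket number], in the voice of a concerned small-business owner."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; any capable chat model would do
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```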

ChatGPT seems to do best when there is a wide range of relevant and available texts to train from. In this regard, the law is nearly an ideal subject. So it would not surprise me if the comment process is broken within the span of a year. Yet how exactly are governments supposed to keep out software-generated content?

Stack Overflow, a software forum, has already banned ChatGPT content because it has led to an unmanageable surfeit of material. The question is whether that ban can be enforced.


Of course regulatory comments are hardly the only vulnerable point in the US political system. ChatGPT can easily write a letter or email to a member of Congress praising or complaining about a particular policy, and that letter will be at least as good as what many constituents would write, arguably even better. Over time, interest groups will employ ChatGPT, and they will flood the political system with artificial but intelligent content.

So how will politicians tell which policies their constituents are genuinely interested in? Perhaps the personal visit to a congressional office will matter all the more. Legislative staff might disregard or at least discount their mail and email, and that will affect how the political system processes voter information. Telephone polls, with all their imperfections, might become more important too, at least until a future GPT gains the ability to talk on the phone.

Congressional staff also use the internet to read policy analyses and gauge public opinion, and that will also have to change. ChatGPT can be used for so-called search-engine optimization, for instance by creating and writing artificial blogs, which will then link to each other. Online manipulation is hardly a new problem, but it will soon be increasingly difficult to distinguish between machine- and human-generated ideas. And remember: ChatGPT is improving all the time. "Let's see what the bloggers have to say" won't be so easy anymore.

Keep in mind all this is different from the classic problems of misinformation. I’ve been playing around with ChatGPT since last week, and my overall impression is that it is considerably more objective than most humans, including when it comes to economic analysis.

Of course other AI innovations might evolve as countermeasures. AI could be used to rank letters, emails, comments and websites by their degree of human content, though it will not be easy. (Just one example of the kinds of questions it will raise: Should software-generated content count for zero?)
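To make that countermeasure concrete, here is a minimal sketch of the kind of ranking such a tool might do, assuming an off-the-shelf detector from Hugging Face. The specific model named below was trained to flag GPT-2 output, not ChatGPT, and is used purely for illustration; reliable detection remains an open problem.

```python
# Illustrative sketch only: scoring comments by how machine-like they look,
# using Hugging Face's text-classification pipeline. The detector named here
# targets GPT-2 text and is an assumption made for this example.
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="openai-community/roberta-base-openai-detector",
)

comments = [
    "Please don't close the local post office, my whole street depends on it.",
    "As an engaged stakeholder, I believe the proposed regulation merits "
    "careful reconsideration in light of its broader economic implications.",
]

for text in comments:
    result = detector(text[:512])[0]  # crude length guard; labels are "Real"/"Fake" per the model card
    # Fold the label and score into one "machine-likeness" number for ranking.
    fake_score = result["score"] if result["label"] == "Fake" else 1 - result["score"]
    print(f"machine-likeness {fake_score:.2f}: {text[:60]}")
```

Even a crude score like this would let staff triage incoming mail, though, as the parenthetical above suggests, deciding how much weight a software-generated comment deserves is a political question, not a technical one.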

I am not pessimistic about the rise of ChatGPT and related AI. Nor do I think it will ruin our government, any more than the post office or polling or the internet did. Legislators may well be able to communicate more clearly and accurately than before, if they so choose.

All that said, this technology will require a rethinking of one of the essential features of a democracy: how to measure public opinion.

For what it’s worth, I asked ChatGPT to write a dialogue around the core question of this piece — how ChatGPT will affect the operation of government. Its answer follows.

Person 2: Well, I think it has the potential to be quite useful in a number of ways. For example, it could be used to automate the analysis of large amounts of text-based data, such as legislation or policy documents.

Person 1: That’s true. And by providing more efficient and effective analysis, it could help government officials make more informed decisions.

Person 2: Exactly. And it could also be used to generate human-like text, which could be useful for things like drafting speeches or responding to constituents.

Person 1: Definitely. But I think it’s important to remember that ChatGPT is just a tool, and how it is used and what impact it has on government operations will ultimately depend on the decisions and actions of individuals and organizations.

Person 2: Agreed. It’s up to us to determine how we use technology like ChatGPT and make sure it is being used in a responsible and ethical manner.

I couldn’t have said it better myself.
