GPT Could Be A Disaster

I was on LinkedIn, as one does, last night while trying to watch Fallout, after playing Helldivers 2 during the day, so maybe I'll chalk some of these thoughts up to that.

One post caught my eye about how people were using ChatGPT and the like to more "deeply" understand news stories, etc. A comment on that post described how the commenter was using it as a sort of harsh critique source. They've been using it to make Product Requirement Documents, among a lot of other things. I think these two uses could be relatively inconsequential, though given the right incentives, they could also turn out to be pretty bad. However, I want to focus on the other thing that was noted in the comment.

Now, this person stated that they were using it as a sort of "Decision Helper". I find this to be pretty disturbing overall. They note a _problem_ and then ask for 3 - 5 options, the pros and cons of those options, and the conditions under which they should pick one option over the others. They then instruct it to ask a couple of questions to help narrow things down so that they end up with a single option.
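To make the pattern concrete, here's a minimal sketch of what that workflow might look like driven through the OpenAI Python SDK. The model name, the example problem, and the exact prompt wording are my own assumptions; the commenter didn't share their actual setup.

```python
# A hypothetical "Decision Helper" prompt, sketched with the OpenAI Python SDK.
# The model name and prompt wording are assumptions, not the commenter's setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# An assumed example problem; any decision you're wrestling with goes here.
problem = "Our team can't decide between a monolith and microservices for a new product."

response = client.chat.completions.create(
    model="gpt-4",  # assumed model; any chat-capable model would do
    messages=[
        {
            "role": "user",
            "content": (
                f"Here is my problem: {problem}\n\n"
                "Give me 3-5 options, with the pros and cons of each, "
                "and the conditions under which I should pick one over the others. "
                "Then ask me a couple of clarifying questions so we can "
                "narrow it down to a single option."
            ),
        }
    ],
)

print(response.choices[0].message.content)
```

Every option and trade-off the model serves up here comes, of course, from the training pipeline discussed below.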

On the surface, this seems cool to me. Admittedly, I've asked some questions about social and technical situations as well, in order to kind of scan "the landscape" for things I might have missed or might not know about, and to decide whether the current path is the right way to go. _However_, I have come to find that this kind of activity needs to be approached very carefully. It's very important to remember that whatever language model you're working with has been carefully trained on a data set that you are unlikely to know many details about. It's also affected by the politics and machinations of its (almost universally) corporate creators.

When you come to these models asking for options, those options are going to be adjudicated by those things. They're also going to reflect extremely populist opinions, or at least those of the various camps of the "very online". Now, in general, I think the majority of our opinions are going to be relatively populist, but also consider the origin of the parameters used in these models and the kinds of morals and ethics that guide the companies and people who have largely created them. Those things deeply affect the results you will get for options, answers, and "opinions" from the models. As such, I think we're going to get pushed harder towards whatever opinions are the most extensively written about and diluted (or weird). The Google "black George Washington" debacle was symptomatic of exactly this kind of "very online" result.

However, there is almost a new kind of fear that comes from these kinds of actions. For instance, the recent 50 terminations at Google signal a different kind of change. DHH has written about the Death of DEI, and he's definitely not out in left field about this. Many would-be allies of DEI-style initiatives that I have talked to privately would never have said anything in the past for fear of being excluded from important projects/initiatives/etc., but would say that they believe in "freedom" more than they believe in having an environment that everyone felt "safe" in. Like, if "Bob" said something offensive, you could tell Bob to shove his opinions up his ass, but you couldn't campaign with HR to try and get Bob terminated. The latter was a popular go-to at many of the organizations I have been a part of, if things got to the point where HR got involved. Cool people who were always focused on bringing peeps to the table seemed to never let things hit HR. They saw HR as the enemy, and it wasn't that anyone felt "unsafe" around Bob; it was that Bob sucked and needed to be told off.

I reference the above because if the tides were to turn against disadvantaged people (and I believe they are starting to), we might get LLMs that start to subtly suggest, or be influenced by, pretty shitty opinions. If we're using them as the primary sounding board for our own ideas, our products, our research, and our policies, we're going to have a Bad Time. The corporate entities that create these models will have an outsized influence on our lives that is more intrusive than ever.

What Can Be Done?

This is a sticky issue. The market can't be relied upon, because there is too much tied up in the training data sets and the subtle biases that can (and will) be introduced. Corporations are already calling their training data sets "trade secrets", and maybe that's true, but those sets are also way too important to go unexamined. Something that goes beyond national governments is necessary, but without the appropriate amount of "bite" available to it, it won't be effective. What international agreements have actually been effective in stopping transnational corporations from using the corporate veil and clever multi-national structures to make big hay while ignoring the letter (if not the spirit) of the law? On top of which, many of the policies we have are literally written by the firms they are intended to govern. Lord knows how much ink has been spilled over the general lack of technical knowledge among the decision makers in government; even if their staff is competent and vocal, money talks. Loudly.

We also can't rely on the free markets to regulate themselves here. Market capture is very strong, and the nicest UI and best UX tend to win out despite other reservations. Consumers, who dictate the largest part of the market for this, don't, won't, or can't care about the effects, because they don't see other reasonable alternatives. Even "innovative" companies can't compete with the big players on the LLM stage, because training and inference costs are so far out of reach that, without using the major players' models and APIs, the effort is useless.

I think, in the end, it might have to be a question of market access. If you don't play by the rules, you don't get to play. Something along the lines of the way mobile providers have to give new market entrants access to their networks in a way that allows them to compete at all. One of the challenges there, which I don't have a great answer for, is enforcement: the comparison falls apart a bit because mobile networks are mostly restricted by physical infrastructure, and LLM access is not.

Authored by: Mal

Written: 2024-04-29