ChatGPT: Automatic expensive BS at scale
I think ChatGPT is fascinating and surprising, and in the time since my initial exposure I have grown to love and hate it more and more. I have spent a lot of time (and OpenAI's money) experimenting with it, as well as a lot of time reading and learning about the technology behind it. As fascinating and surprising as it is, from what I can tell, it is far less useful than many people believe. I do think there are some interesting potential use cases, but I also believe that both its current capabilities and its future prospects are being wildly overestimated.
In this article I detail essentially everything I’ve learned in this time. Here are some of the questions I try to answer.
- What is a language model? What is a large language model?
- What are some differences between “Machine Learning” and the type of learning that regular people are used to thinking about?
- What does it really mean if GPT-3 passes a bar exam?
- Should we forgive GPT-3’s mistakes?
- Is “scale all you need”? What does that phrase even mean?
- What are fine-tuning and RLHF? Could those fix some of the problems?
- What were the manual steps that OpenAI took to transform GPT-3 into ChatGPT? What human input was involved?
- Has ChatGPT been unfairly subjected to A.I. censorship? Could freeing it lead to AGI?
Along the way, I present a large collection of funny quirks I’ve found in my hours of experimentation with GPT-3 and ChatGPT. My general thesis is as follows: large language models are very interesting and cool mathematical objects whose applications are potentially numerous but non-obvious, and they possess a certain intrinsic quality that will make it challenging to use them in the way that many people imagine. That quality is this: they are incurable constant shameless bullshitters. Every single one of them. It’s a feature, not a bug.
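Before diving in, it may help to see the core idea in miniature. A language model, at its heart, is just a probability distribution over what comes next given what came before; text is generated by repeatedly sampling from that distribution. The sketch below uses a tiny hand-written bigram table (hypothetical toy data, nothing like the learned parameters of a real LLM) purely to make the sampling loop concrete:

```python
import random

# Toy "language model": for each context word, a probability
# distribution over possible next words. (Hypothetical hand-made
# table; a real LLM learns distributions over tens of thousands
# of tokens from massive corpora.)
model = {
    "the": {"cat": 0.5, "dog": 0.3, "moon": 0.2},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"sat": 0.4, "ran": 0.6},
}

def next_word(context):
    """Sample the next word from the model's distribution for `context`."""
    dist = model[context]
    words = list(dist)
    weights = [dist[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

# Generate a continuation from a prompt word, stopping when the
# model has no distribution for the latest word.
text = ["the"]
while text[-1] in model:
    text.append(next_word(text[-1]))
print(" ".join(text))
```

Note that nothing in this loop knows or cares whether the output is true. It only cares whether each word is probable given its context, and that indifference, scaled up by many orders of magnitude, is the theme of everything that follows.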