无根生

ChatGPT

Let's talk about ChatGPT, which has been on fire for several months now. When it first came out, I quickly registered an account with OpenAI. After asking it a few questions, I was impressed: this thing is amazing! Compared with the "artificial idiots" that came before, it is so much more "intelligent" that chatting with it feels like talking to a real person.

What is ChatGPT?

Let's ask it directly.

(screenshot of ChatGPT's reply)

What can it do?

It is a model trained on data from before 2021. It can teach you skills, offer suggestions in response to your questions, help you write code, and more. Its knowledge spans a wide range of topics, including history, geography, humanities, emotions, literature, and technology. It does not always answer accurately, but with enough data these problems will shrink. Online users have come up with plenty of tricks: have it play the role of a XXX, act as a weekly-report generator, a language-learning tutor, or a translator that beats DeepL; and once TTS is bolted on, it can directly outperform Siri, Xiaoice, and Tmall Genie.


Due to ChatGPT's outstanding performance, its backer Microsoft announced that it will integrate ChatGPT into its search engine, letting users get results directly through chat. Think about it: isn't this the most natural scenario? I ask a question and you give an answer, with no need to work out which result is an ad, which is nonsense, and which is actually helpful. That is probably what a search engine should look like.

Improving efficiency

As a developer, the biggest help ChatGPT gives me is with writing code. I describe my requirements to it and let it write an outline, then adjust its code and quickly arrive at the functionality I want. If there is anything I don't understand, I just ask. In this "ask anything" mode I can pick up new skills quickly. In the future it really will be "programming with ChatGPT."

Evolution

When I was halfway through writing this article, GPT-4 was released. While I was still mulling it over, ChatGPT "upgraded" again. It has been only a little over 100 days since ChatGPT first launched, and in that window the GPT-3.5-Turbo update arrived just over ten days ago. Bing Chat was already running the GPT-4 model, so the OpenAI team probably finished training it long ago; the intervening time was presumably spent on content review and bug fixing. "We spent 6 months making GPT-4 safer and more aligned. GPT-4 is 82% less likely to respond to requests for disallowed content and 40% more likely to produce factual responses than GPT-3.5 on our internal evaluations." That is what the official announcement says.

GPT-4 introduction page: https://openai.com/product/gpt-4

You could say this is the iPhone-among-feature-phones moment: the beginning of general artificial intelligence. For example, in the past, to identify spam messages on your phone you had to assemble a pile of well-labeled data, write a neural network model, feed it that data, and get a model with decent accuracy; every so often you retrained it on newly labeled data to produce an updated model. That no longer seems necessary. Just write a prompt for ChatGPT and it can tell directly whether a message is spam, a skill it has without any specialized training. The only thing left to weigh is the price of API calls. OpenAI has cut those prices significantly, making them affordable for personal use, but they may still be too high for large-scale integration into your own service: every conversation turn resends its context, which consumes tokens, and users have limited budgets. Hopefully it will become as cheap as electricity and tap water. By then, "everything can be AI."
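The prompt-as-classifier idea above can be sketched in a few lines. This is a minimal illustration, not production code: the system-prompt wording and the one-word reply protocol are my own assumptions, and the actual network call (which needs the `openai` package and an API key) is shown only in a comment.

```python
# Sketch of prompt-based spam filtering: instead of training a model,
# we ask a chat model to answer with a single word and parse that word.

def build_spam_check(message: str) -> list:
    """Build chat messages asking the model to classify one SMS."""
    return [
        {"role": "system",
         "content": "You are a spam filter. Reply with exactly one word: "
                    "SPAM or HAM."},
        {"role": "user", "content": message},
    ]

def parse_verdict(reply: str) -> bool:
    """Map the model's one-word reply to a boolean (True = spam)."""
    return reply.strip().upper().startswith("SPAM")

# The real call (circa March 2023) would look roughly like:
#
#   import openai
#   resp = openai.ChatCompletion.create(
#       model="gpt-3.5-turbo", messages=build_spam_check(sms))
#   is_spam = parse_verdict(resp.choices[0].message.content)
```

The whole "training pipeline" collapses into a system prompt plus a trivial parser; the trade-off, as the paragraph above notes, is that every classification now costs tokens.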

GPT-3.5-Turbo Pricing

  • It’s priced at $0.002 per 1K tokens, which is 10x cheaper than the existing GPT-3.5 models.

GPT-4 API Pricing

  • gpt-4 with an 8K context window (about 13 pages of text) will cost $0.03 per 1K prompt tokens, and $0.06 per 1K completion tokens.
  • gpt-4-32k with a 32K context window (about 52 pages of text) will cost $0.06 per 1K prompt tokens, and $0.12 per 1K completion tokens.
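To make those per-1K-token numbers concrete, here is a small sketch that hard-codes the prices quoted above (USD, as of March 2023) and estimates the cost of a single call; the function name and table layout are my own.

```python
# Per-1K-token prices (USD) copied from the list above.
PRICES = {
    "gpt-3.5-turbo": {"prompt": 0.002, "completion": 0.002},
    "gpt-4":         {"prompt": 0.03,  "completion": 0.06},
    "gpt-4-32k":     {"prompt": 0.06,  "completion": 0.12},
}

def call_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Estimate the dollar cost of one chat completion."""
    p = PRICES[model]
    return (prompt_tokens / 1000 * p["prompt"]
            + completion_tokens / 1000 * p["completion"])

# e.g. a gpt-4 call with 1,000 prompt tokens and 500 completion tokens
# costs 1.0 * 0.03 + 0.5 * 0.06 = $0.06 -- thirty times the price of the
# same call on gpt-3.5-turbo.
```

Remember that in a chat, the full prior context is resent as prompt tokens on every turn, so real conversation costs grow much faster than a single-call estimate suggests.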

GPT-4

GPT-4 provides more powerful capabilities, but it also demands more "money" power.


This article is only halfway done, and GPT updates too quickly. I'll revise it and share more thoughts later.
