GPT-4 has arrived, plus a poor man's ChatGPT: Alpaca
It’s been a crazy week for AI.
Hey, this is AI Progress Newsletter - every week I write this email to share the most interesting AI updates, trends, opportunities, and ideas with you.
Let’s dive in!
🌐 Big Tech Moves
GPT-4: People have been anticipating GPT-4 for years, and here it is. It’s not just a language model. It’s a large multimodal model that can take text, images, or both as input and produces text output. It’s a bit disappointing that it can’t output images, but okay.
GPT-4 Access: There are two ways to access GPT-4. You can either subscribe to ChatGPT Plus or join the GPT-4 API waitlist.
Google Workspace AI: Google announced multiple AI features, including draft generation in Google Docs and slide generation (something I once wanted to build). It’s quite impressive “on video,” but the reality is they haven’t rolled it out yet. One reason Google’s LLMs never catch on is that they don’t let people try them when they’re announced. The announcement becomes just a PR release.
Microsoft 365 Copilot: It’s a competing product to Google Workspace AI. Microsoft plugged GPT-4 into every place they could, like Word, PowerPoint, Teams, etc. You can not only auto-generate slides but also ask Teams to summarize your meeting based on your interests. The demo video looks very smooth.
Research Spotlight:
If you want to train your own ChatGPT for $100, Alpaca is here to help. This changes everything, because every researcher will now have easy access to, and control over, powerful instruction-following LLMs. The Stanford NLP team details their framework for building such a small but powerful model in their blog.
Midjourney v5 is truly amazing. It can generate super-realistic pictures. See how much it has evolved from v1 to v5.
Use the training run simulator to predict the effect of specific training examples. This is especially crucial when training a large model.
You can now predict 3D protein structures more accurately with DeepMind’s AlphaFold 2!
People Talking
The AI community is not happy that OpenAI did not disclose what data GPT-4 was trained on or what architecture it uses. OpenAI even openly admitted that their earlier approach of sharing research results was, in their view, flat-out wrong.
It turns out GPT-4 generally does not "understand" ordinality. I tried this a few times as well. If you tell GPT-4 it’s wrong, it can sometimes correct the mistake.
NLP PhD students are feeling anxious because of GPT-4, since it can handle most NLP tasks quite decently. This has made many NLP research projects irrelevant.
End Note
If you’re enjoying this newsletter, I’d be so happy if you shared it with a friend or your mom. You can send them here to sign up.
Have a great week!