AI-Jungle Guide #1
Artificial "General" Intelligence / Attack of the Claudes / Ilya be back
Welcome to the first issue of the AI-Jungle Guide. Since this newsletter just hit double-digit subscribers, we’re all still on a first-name basis! So I’m very eager for your feedback on how I can improve this newsletter in the coming issues!

This image was generated with leonardo.ai. Try it, it’s good & free.
News:
OpenAI’s Artificial “General” Intelligence:
OpenAI is putting the “General” in Artificial General Intelligence (AGI). How did they do that? Turns out it’s easy: you hire one!
US Army General and former NSA Director Paul Nakasone is now an OpenAI board member. [1]
You might have some concerns about a former US military general being involved with the future of AI and you’re not alone!
Edward Snowden says: “They've gone full mask off: do not ever trust OpenAI or its products”. [1]
Attack of the Claudes: Anthropic’s New AI Model:
If you agree with Snowden, there is a new hope: Anthropic (the other AI company) just released Claude 3.5 Sonnet, an AI model that beats OpenAI’s newest GPT-4o model in almost all benchmarks handpicked by its maker (but you don’t have to take their word for it, you can try the new Claude yourself at claude.ai). [2]
Another update by Anthropic is the Artifacts feature. Artifacts are text documents generated by Claude that are displayed next to your chat, so you can easily generate multiple files and iterate on them (for example code or websites). [3]
Ilya be back: Ensuring Safe Superintelligence
And if you’re worried that AGI might overthrow humanity soon, then you’re probably relieved to know that Ilya Sutskever started a new company called “Safe Superintelligence Inc.” which has only one goal: to make a safe superintelligence. After his failed attempt to fire OpenAI CEO Sam Altman, Ilya left OpenAI where he was Chief Scientist and is now on a mission to make AGI safe with his new company. [4]

At least humans have an edge on being funny (for now).
Background:
What is Artificial General Intelligence?
Artificial General Intelligence (AGI) is sometimes also called superintelligence. It refers to an AI that is equal to or better than humans at everything. Some people are afraid of AGI happening, because they assume that an AI that is more intelligent than humans could easily “escape” and overthrow humanity (just like Judgment Day in the Terminator movies). I’m a bit sceptical myself: take a look at humans, for example Donald Trump and Stephen Hawking. Trump came a lot closer to world domination and is a lot more dangerous, while Hawking was clearly in the lead regarding brain power. But maybe the real danger is who controls AGI when it is developed, and there Trump is a lot closer than Hawking (should he win the next election, and also by virtue of still being alive).
What is Anthropic and who is this Claude guy?
Anthropic is a US-based AI startup, which was founded by former members of OpenAI in 2021.
It builds large language models (LLMs), similar to OpenAI’s GPT, that are called Claude (so Claude is really an American, zut alors!). Amazon invested $4 billion in 2023 (and Google another $2 billion).
And who wouldn’t trust Jeff Bezos, right? [5]
Who is Ilya Sutskever?
Ilya is one of the most important figures in AI. He is a researcher and was one of the co-founders of OpenAI. He also fired OpenAI’s CEO Sam Altman last year, or at least he tried to (you probably heard of the drama), because Ilya is a strong believer in AGI and concerned about safety for humanity, should AGI be developed in the near future.
Sam Altman, on the other hand, seems more concerned about the money-making side of AI. So after the unsuccessful mutiny, Ilya left OpenAI and started a company that tries to build superintelligence (AGI) in a safe way, whatever that means (maybe just that Sam Altman and his general are not involved). [6]

Why wasn’t Claude named Robotierre? [7] Why is leonardo.ai funnier than Claude?
Try it yourself:
Hire a General: While not practical advice for most of us, it’s a notable strategy by OpenAI.
Try Claude: Explore Claude at claude.ai and see if it meets your needs as well as ChatGPT. It’s free, and you can experiment with the new artifacts feature, which provides generated code and text snippets alongside your chat. It’s pretty cool!
Watch for Developments from Safe Superintelligence Inc.: There’s nothing to try from Ilya’s new company yet, and I doubt there will be anytime soon, as they are not targeting end users. I’m not sure what or whom they are targeting at all, to be honest.
Spread the Word!
Learned something new? Please do me a favour and forward this newsletter to someone else who could benefit from it! Or invite them to subscribe at: https://aijungle.guide!
[1] https://futurism.com/the-byte/snowden-openai-calculated-betrayal
[2] https://www.anthropic.com/news/claude-3-5-sonnet
[3] https://support.anthropic.com/en/articles/9487310-what-are-artifacts-and-how-do-i-use-them
[4] https://x.com/ssi/status/1803472825476587910
[5] https://en.wikipedia.org/wiki/Anthropic
[6] https://en.wikipedia.org/wiki/Ilya_Sutskever
[7] https://de.wikipedia.org/wiki/Maximilien_de_Robespierre