Artificial General Intelligence and beyond: do we really know what’s going on?

Published on 2024-06-15 09:10

“AI systems don’t need to be able to do everything to show explosive growth – they just need to automate AI research and development.” – Leopold Aschenbrenner

Leopold Aschenbrenner used to be a research teammate of Ilya Sutskever and Jan Leike – both prominent ex-OpenAI employees who were working on making AI and its future development safer. All three had their doubts about the company’s approach to developing AI safely. Word has it, Aschenbrenner wrote an internal memo about the safety issues he saw and confronted the board with it. He may also have shared the memo externally, which gave management an opportunity to let him go. That’s my understanding of how things went down.

Why is this important? Because it completes a slowly emerging picture of a company that seems to put product development above safety. I am not saying it does, but it sure looks like it.

Anyhow. Aschenbrenner, a German emigrant to the US, wrote an incredible text about two weeks ago, titled:

“Situational Awareness”.


A wake-up call

In his report, Aschenbrenner paints a dramatic picture of the next five to ten years. According to his assessment, we are on the verge of developing artificial super-intelligence (ASI), which will far surpass human intellect. As early as 2025/26, AI systems will outperform many university graduates, and by 2030 they will be smarter than all of us.

Aschenbrenner is convinced that the world has not even begun to realise what is in store for us in the coming years. While the computing power of AI clusters is growing exponentially and the industry is investing trillions, experts are still discussing “hype” and “business as usual”. But soon, Aschenbrenner hopes, the world will wake up – hopefully not too late.


A race for the future of humanity

Aschenbrenner sees the USA in a race with China to develop ASI. In his opinion, the outcome of this race will determine the future of mankind. Whoever develops ASI first will have unprecedented power. Aschenbrenner believes it is all the more important for the free world to win this race.

But getting there is not easy. Aschenbrenner warns urgently of the risks of a faulty or malicious ASI. If we do not manage to align AI systems perfectly with our human values (this is what AI experts call “alignment”), it could mean the end of humanity. However, he is optimistic about the technical feasibility of alignment.


What you can do today

– Find out about the rapid developments in the field of AI and form your own informed opinion. Talk about it with friends and family. Get the word out.

– Support initiatives and researchers working on the safe development of ASI.

– Advocate for international cooperation and regulation to prevent a dangerous AI race.

– Use AI systems today to expand your knowledge and become more productive, while keeping a keen eye on the ramifications for those who cannot or do not want to participate.

– No doom & gloom. Remain optimistic: ASI offers enormous opportunities if we control the risks.


Top Links

Anthropic Claude (https://www.anthropic.com): Advanced AI assistant with a focus on safety.

Aschenbrenner’s Vita (https://www.linkedin.com/in/leopold-aschenbrenner): His LinkedIn profile.

Homepage (https://www.forourposterity.com/): Aschenbrenner’s homepage with additional interesting blog entries.

OpenAI Superalignment (https://openai.com/superalignment/): The current research team at OpenAI working towards their “goal […] to solve the core technical challenges of superintelligence alignment by 2027”.


I will end with a quote from Stuart Russell, one of the leading AI researchers:

“When it comes to a super-intelligent AI that can change itself, we probably only have one chance to create the right starting conditions.”

Let’s work together to ensure that the development of ASI will be a blessing, not a curse – for all of us.

As for me, I will definitely read, listen, and watch more on the topic and keep a close eye on Aschenbrenner – these are fundamental developments no one should miss if they really want to understand what is currently going on.

Arno Selhorst


Dive deeper

[1] Ex-OpenAI Employee Reveals terrifying Future of AI

[2] Leopold Aschenbrenner on LinkedIn

[3] A blog article evaluating “Situational Awareness”

[4] Another assessment of the paper

[5] Aschenbrenner’s Homepage

[6] Quotes from Leopold Aschenbrenner’s Situational Awareness Paper

[7] https://situational-awareness.ai

[8] Leopold Aschenbrenner – 2027 AGI, China/US Super-Intelligence Race, & The Return of History

