Hey Everyone,

When I glance over my recent articles on OpenAI, I have to admit it’s not looking good. The “generational AI” startup approaching $3.5 billion in revenue, led by Sam Altman, turns out not even to be a secure organization.

LinkedIn News thought they would immortalize my off-the-cuff comment1 on this dismal disclosure.


What is the point of being a closed-source LLM startup if you don’t prioritize security and safety, and protect your trade secrets from IP theft?

It’s clear in mid-2024 that OpenAI hasn’t historically cared about security, cybersecurity or AI trust and safety to the extent it pretended to.

Unfortunately, with the exodus of their entire Superalignment team, things have gotten worse, not better. When they put a former NSA boss on their board, they immediately pulled out of China. China, which was supposedly somewhere between two years and nine months behind America in LLMs? Not anymore, as of mid-2024. ChatGPT maker OpenAI is planning to block access to technology used to build AI products for entities in China and some other countries, Chinese state-owned newspaper Securities Times reported on Tuesday, June 23rd. Then it was finally revealed that:

A hacker gained access to the internal messaging systems of OpenAI early last year, and stole details about its technology, The New York Times reports, citing anonymous sources. The company did not publicize the news or inform law enforcement because no customer or partner information was stolen, the sources said.

The hacker accessed some employee discussions, but did not get into the systems where it builds its artificial intelligence, per the report. Meanwhile, OpenAI recently updated its ChatGPT app for Apple’s Mac operating system to fix a security flaw, The Verge reports. Until Friday, conversations were stored in plain text. (I was on Threads when this random person discovered this!).

The details:

So why is OpenAI’s conduct around privacy, cybersecurity and China getting a bit weird?

The breach occurred in early 2023, with the hacker accessing an online forum where employees discussed OpenAI’s latest tech advances.

While core AI systems and customer data weren’t compromised, internal discussions about AI designs were exposed.

OpenAI informed employees and the board in April 2023, but did not disclose the incident publicly or to law enforcement.

Former researcher Leopold Aschenbrenner (later fired for allegedly leaking sensitive info) criticized OpenAI’s security in a memo following the hack.

OpenAI has since established a Safety and Security Committee, including the addition of former NSA head Paul Nakasone, to address future risks.

Leopold might want to profit from AGI himself with his new Venture Fund, but the kid was right!

OpenAI’s track record in cybersecurity is so bad that if I were an enterprise firm, I’d have difficulty trusting them. All that valuable IP, and a median U.S. compensation package totaling $860K. You’d think they could do better? All those shiny products have a cost, it seems.

Big on product but lax on privacy and security, and for all that shine, the lack of DAUs actually using ChatGPT is stunning. How about storing users’ conversations in plain text? The specific security flaw was first exposed on July 2, when engineer Pedro José Pereira Vieito pointed out on Twitter/X that the Mac version of ChatGPT was storing users’ conversations in plain text rather than encrypting them, meaning hackers could read them with no effort.

This is not how you run a generational AI startup with $13 Bn. in funding from Microsoft alone. This is how monopolies go soft: too much privilege and a cushy package.

After The Verge alerted OpenAI, the company released an update to encrypt the chats. They needed random people on Threads to alert them that this was not a great idea?

If China is good to great at IP theft, where do you suppose their tricks and “closed-source” LLM trade IP rest? ChatGPT is not available in mainland China, but many Chinese startups have been able to access OpenAI’s application programming interface (API) platform and use it to build their own applications. This is finally set to change. It took putting the former head of the NSA on their board to alert them to this threat? I really don’t get it! If this is how much they value cybersecurity, how much do you suppose they value user privacy?

Microsoft’s own conduct with its Recall product, which takes pictures of your screen every five seconds and was originally enabled by default, shows the level of security we are getting in the Copilot era. These aren’t people who value privacy or national security. Microsoft keeps failing at cybersecurity so badly they even had to tie executive pay to it. It’s not a normal situation.

We just have to assume that China now knows what OpenAI knew up until April 2023. China’s recent mysterious progress in LLMs would also suggest this is the case. Is OpenAI’s secret sauce out in the wild? As other players continue to level the playing field in the AI race, it’s fair to wonder whether leaks and hacks have played a role. The report also adds new intrigue to the firing of Aschenbrenner, who has been adamant that his dismissal was politically motivated. But given that his VC firm wants to cash out on AGI, we cannot consider him a good actor either. There’s clearly a lot of corruption, and it makes me wonder how much I should trust ChatGPT.

OpenAI executives revealed the incident to employees during an all-hands meeting at the company’s San Francisco offices in April 2023 and informed its board of directors, according to the two people. So why was this never publicly disclosed until more than a year later? If this were a public company, this would be a huge scandal.

OpenAI hasn’t just lost its voice in 2024; it’s lost its credibility and its status as a trustworthy frontier model builder to Anthropic and Claude 3.5. OpenAI doesn’t just have constant PR issues and internal dramas, it has a declining product compared to its main and direct competition. Even highly respected ML researchers are rumored to be making the switch from ChatGPT to Claude 3.5.

The Move to Claude 3.5 from ChatGPT has Begun

I’m growing uncomfortable with OpenAI’s leadership. If this really is a generational AI company, I believe it should have a professional CEO who knows what they are doing.

If you are the leading frontier model builder in the world, you essentially have the best world-class talent there is. That you would be endangering national security is not tolerable. The Board has to act. But even Microsoft has a growing inability to do the right thing, because, you know, they all have dollars in their eyes. 🤑 But some things really are more important than money.

Time and time again OpenAI has shown its true colors. Hackers have hacked away any perception of security around the latest AI code. We are at a crossroads for the value of closed models that pretend or promise more than they can deliver.

Chinese users of the platform (OpenAI) have received emails warning they are in a “region that OpenAI does not currently support” and that additional measures to block API traffic from some regions would be taken starting July 9. In America’s attempt to keep China behind them, what are they prepared to do?

While OpenAI’s systems, where the company keeps its training data, algorithms, results, and customer data, were not compromised in the 2023 hack, some sensitive information was exposed.

Leopold Aschenbrenner, a technical program manager at OpenAI, criticized the company’s security measures, suggesting they were inadequate to prevent foreign adversaries from accessing sensitive information. You should check out his long interview on YouTube. Of course, his understanding of China in AI is rusty at best.

All of this just rubs me the wrong way and makes me further hesitant to buy into their AGI hype and AI-washing narrative. Of course, my baseline skepticism is perhaps higher than all of their PR campaigns on Twitter, LinkedIn and social media. In truth, OpenAI’s success generating revenue with ChatGPT and their API has likely stifled American innovation, meaning fewer AI startups will even survive. That’s monopoly capitalism for you; Microsoft should be under investigation.

This isn’t how the free market or “democratizing AI” is even supposed to work. There is supposed to be a level playing field.

Anthropic CEO Dario Amodei said on the In Good Company podcast that AI models in development today can cost up to $1 billion to train. Current models like ChatGPT-4o cost only about $100 million, but he expects the cost of training these models to rise to $10 or even $100 billion in as little as three years.

It’s all so hard to believe! Even with the diminishing returns and slowing rate of improvement of LLMs today, and with BigTech likely wasting capital on huge AI datacenters, why should the U.S. boast the leading frontier models to the exclusion of China and the rest of the world? Already, European LLM makers aren’t able to keep up. This means Europe will be significantly behind in Generative AI, if it isn’t already in 2024.

OpenAI seems to have inherited Microsoft’s weak security culture. China’s foreign IP theft teams have likely made the closed-source nature of its business obsolete. Chinese Generative AI startups and open-source models display weird and surprising capabilities in 2024, even while rationing compute. How can this be?

In April 2024, a government review board described a hack of Microsoft last summer, attributed to China, as “preventable.” In April too, eh? You don’t say. The U.S. Department of Homeland Security’s Cyber Safety Review Board pointed to “a cascade of errors” and a corporate culture at Microsoft “that deprioritized enterprise security investments and rigorous risk management.”

The plain text Mac debacle was even stranger than fiction.

Shows you how much they care about their customers and their privacy. Until Friday, June 28th, OpenAI’s recently launched ChatGPT macOS app had a potentially worrying security issue: it wasn’t hard to find your chats stored on your computer and read them in plain text. Apple is known for its attention to privacy, so it’s very bad PR for both OpenAI and Apple.

As demonstrated by Pedro José Pereira Vieito on Threads, the ease of access meant it was possible to have another app access those files and show you the text of your conversations right after they happened.

The Verge reporter asked Pereira Vieito how he discovered the original issue. “I was curious about why [OpenAI] opted out of using the app sandbox protections and ended up checking where they stored the app data,” he said.
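To make the plain-text issue concrete, here is a minimal sketch of why it matters. The file path and JSON layout below are hypothetical, not OpenAI’s actual storage format; the point is simply that any process running as the same user can read unencrypted, unsandboxed app data with ordinary file I/O:

```python
# Illustrative sketch only: demonstrates why unencrypted chat logs are
# trivially readable by any other process with the same user's permissions.
# The directory layout and file format here are hypothetical.
import json
import tempfile
from pathlib import Path

# Stand-in for an app's unsandboxed data directory.
store = Path(tempfile.mkdtemp()) / "conversations"
store.mkdir()

# The app writes a conversation to disk in plain text (the reported flaw).
chat = {"role": "user", "content": "my confidential question"}
(store / "chat-001.json").write_text(json.dumps(chat))

# Any other app running as the same user can read it back verbatim,
# with no credentials and no decryption step.
leaked = json.loads((store / "chat-001.json").read_text())
print(leaked["content"])  # prints "my confidential question"
```

Opting into the macOS app sandbox, or encrypting the files at rest (which is what OpenAI’s later update did), closes off exactly this kind of read.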

Do not Trust OpenAI

Back in 2023, it was reported that ChatGPT would require more than 30,000 GPUs, with Sam Altman confirming that GPT-4 cost $100 million to train. Last year, over 3.8 million GPUs were delivered to data centers. With Nvidia’s latest B200 AI chip costing around $30,000 to $40,000, we can surmise that Dario’s billion-dollar estimate is on track for 2024.
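As a sanity check, the napkin math behind that surmise works out. All figures here are the rough public estimates quoted above, not confirmed numbers:

```python
# Napkin math using the public estimates quoted in the text above;
# none of these figures are confirmed by OpenAI or Nvidia.
gpus = 30_000                          # reported GPU count for ChatGPT
b200_low, b200_high = 30_000, 40_000   # reported B200 price range (USD)

hw_low = gpus * b200_low
hw_high = gpus * b200_high
print(f"Hardware alone: ${hw_low/1e9:.1f}B to ${hw_high/1e9:.1f}B")
# prints "Hardware alone: $0.9B to $1.2B"
```

A 30,000-GPU cluster at B200 prices already lands around $1 billion in chips, before power, networking, and the training run itself.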

So what will 2027 or 2028 feel like? The compute and the stakes will be so much higher. Can we trust actors like OpenAI to lead us into a brave new world? No wonder Nvidia and OpenAI are making most of the profits; Nvidia can still make a lot of money in China even with the current AI chip bans. Nvidia is forecast to make $12 billion from selling GPUs into China this year despite US trade restrictions aimed at curbing Beijing’s AI ambitions.

It’s a bit of napkin math, for sure: according to the latest data from research firm SemiAnalysis, Nvidia is set to ship over 1 million of its new H20 product to the Chinese market, and with each one said to cost between $12,000 and $13,000 apiece, that would deliver over $12 billion to the company’s coffers.
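That napkin math checks out. Using the SemiAnalysis figures as quoted, and treating them as estimates rather than confirmed sales:

```python
# SemiAnalysis estimates as quoted above; not confirmed sales figures.
h20_units = 1_000_000                      # estimated H20 shipments to China
price_low, price_high = 12_000, 13_000     # reported per-unit price (USD)

rev_low = h20_units * price_low
rev_high = h20_units * price_high
print(f"Estimated H20 revenue: ${rev_low/1e9:.0f}B to ${rev_high/1e9:.0f}B")
# prints "Estimated H20 revenue: $12B to $13B"
```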

China is highly likely to use Generative AI to build apps that are the equivalent of what we have in TikTok or Temu today. If they aren’t trojan horses for Chinese interests, it’s not clear what they are. Outside of OpenAI’s app, it’s also clear the U.S. is fairly bad at building Generative AI apps and products. That’s not a good sign for U.S. competition with China.

OpenAI hasn’t been able to build products that come close to replicating the euphoria of ChatGPT. While their revenue growth has been impressive, it’s detrimental not only to other U.S. startups, but to the entire Sovereign AI movement of firms like Aleph Alpha in Germany, Cohere in Canada, AI21 Labs in Israel and so forth. It’s also beginning to hurt Cloud software companies in the U.S. and hurt Microsoft’s own viability to be a leader in AI in the future. Microsoft spawned an even more dangerous company to national security than itself, simply out of greed to get their hands on the tech before Google and other rivals.

In this light, Elon Musk’s xAI might even be benevolent compared to what OpenAI has become and will become. The monster Series B by xAI ($6 Bn.) now complicates everything, including the Big Four in frontier models:

OpenAI

Google

Anthropic

xAI

In the 2025 to 2030 period, the AI arms race runs the risk of provoking global inter-state conflict. This is because the stakes are so high in combining Generative AI with military technology, among other things like, you guessed it, cybersecurity and the protection of critical infrastructure like energy grids.

Even as Google and Microsoft lose their ESG status, with ballooning carbon footprints (against all they had promised), the profit motive at BigTech is clouding the geopolitical safety of humanity and the future of AI.

If Nvidia is the hero of compute and profitability in 2024, OpenAI is a very dark anti-hero company. It’s also showing American leadership has no idea what it is doing in AI and the risks involved.

Generative AI Research Labs are Ripe for Security Breaches and IP Espionage

If OpenAI has been this lax historically, they have likely already been hacked again and aren’t even aware of it. If you don’t even disclose these things to the authorities, China knows way more than you are letting on. Leo was quoted, for the record, as saying that China is actually just three months behind the U.S.

Leo is awfully jolly for a kid warning about OpenAI’s lack of security. What does he know that we don’t? That’s right, we haven’t been told. We don’t actually have a right to know.

Read more: OpenAI

Watch: “…AI Labs are extremely vulnerable…”2

Conclusion on OpenAI’s Security

The fact that OpenAI doesn’t disclose when there is a cybersecurity breach points to a culture where a lot of things are going wrong internally that we don’t hear about and may never hear about. That’s not the kind of leadership you want handling your data, or as your frontier model of choice.

With the West becoming protectionist, a sudden cut of access to OpenAI’s API would definitely pose a challenge to many startups and corporations in China right now. China appears to be doing fine creating its own open-source LLMs, with hundreds of teams working on them, thanks in part to Meta’s Llama models.

OpenAI had the responsibility to disclose the “major security incident” in 2023, but did not. They chose to remain silent yet again about a significant event inside OpenAI. Instead of doing what any reasonable board would do, 14 months later OpenAI notified its users in China that they would be blocked from using its tools and services from July 9, 2024, following the appointment of the former NSA director to its board.

For the record, the 2023 hacker did not get access to internal systems, models in progress, secret roadmaps or customer data; as far as we know, just employee discussions that did include sensitive data. There’s no question today that OpenAI cut corners in obtaining training data for its models, with TechCrunch reporting (citing Insider) that OpenAI used questionably legal sources like copyrighted books in its training data, a practice it claims to have given up.

In the AI arms race and the fight for AI supremacy, OpenAI is now trying to lock down access from China. OpenAI has not elaborated on the reason for its sudden decision. ChatGPT is already blocked in China by the government’s firewall, but until this week developers could use virtual private networks to access OpenAI’s tools in order to fine-tune their own generative AI applications and benchmark their own research. Now the block is coming from the US side.

Furthermore, OpenAI’s culture of blatant exaggeration about the capabilities of its models has contributed to AI-washing and hype that can only lead to investor losses in related stocks and market chaos when the next AI winter rolls around. These fraudulent PR tactics of AGI boosting have been visible all over social media for the past 19 months, or longer.

OpenAI is part of a machinery of American exceptionalism that is bound to hurt their allies more than it helps. Rising tensions between Washington and Beijing have prompted the US to restrict the export to China of certain advanced semiconductors that are vital for training the most cutting-edge AI technology, putting pressure on other parts of the AI industry. In spite of all of these safeguards, trade bans and blacklists the U.S. has imposed, it’s still most likely to lose its dominance in AI to China.

OpenAI’s questionable methods and shady ethics are guaranteed to be a big part of that story. Alongside a spectrum of impressive Generative AI startups, even with limited foreign investment, China’s BAT companies are working hard on AI in their own way, with less extravagance and negligence.

At the World Artificial Intelligence Conference (WAIC) in Shanghai, Chinese AI champion SenseTime on Friday released a series of updated versions of its SenseNova LLMs, including SenseNova 5.5, its latest foundational model, which the company claims has 30 per cent improved performance compared with the previous version released in April. One week ago, Baidu released Ernie 4.0 Turbo in a ‘significant upgrade’ to its AI chatbot.

Microsoft recently tried to lure members of its Microsoft Asia team to relocate and is now requiring that staff in China use iPhones for work starting in September. The measure essentially bans Android-powered devices for these workers and is part of Microsoft’s Secure Future Initiative, designed to make sure all employees use the Microsoft Authenticator password manager and Identity Pass app, Bloomberg reported on Monday, July 8th.

It’s OpenAI vs. China’s ‘war of a hundred models’; who do you suppose is going to win? The U.S. clearly doesn’t know what it is doing in long-term strategy for emerging technologies, cybersecurity or counter-IP-theft positioning in 2024, even as BigTech and America’s monopoly kingdoms work increasingly with the DoD, national security agencies and branches like the NSA. The NSA on OpenAI’s board? Great, that’s reassuring.

AI Supremacy
Is OpenAI an AI Surveillance Tool?
Hello Everyone, I had been fairly concerned that OpenAI was working with the Pentagon. They claimed it was on cybersecurity. OpenAI is on pace for $3.4 billion of annual revenue by the end of 2024, this roughly translates as 4x that of rival Anthropic in terms of their expected annual recurring revenue…
Read more

These are some of the reasons I grow wary of America’s approach to Generative AI and its picking of winners via Cloud sponsorships like OpenAI and Anthropic. Moving fast doesn’t ensure that you will compete globally in a sustainable way. OpenAI is the poster child of pushing revenue over safety in the Generative AI LLM space. The old board was right all along.

Thanks for reading!

2

What does Leo know about National security really? Link
