july 7

week of 7/3/2023 - 7/7/2023

Steady progress with existing tools, AI security concerns continue, and supercomputers and superintelligence loom.

what to know for now

🔌 OpenAI announces Beta mode for Code Interpreter plugin. The tool, initially tested in Alpha mode, will now be available for ChatGPT Plus users, offering a range of functions including data analysis, chart creation, and code execution. [Read more]

🔒 Adobe restricts employees' use of AI tools. In an internal email, Adobe's Chief Information Officer Cindy Stoddard instructed employees not to use personal email accounts or corporate credit cards when signing up for AI tools like ChatGPT. [Read more]

🚀 OpenAI announces general availability of GPT-4 API. Starting Thursday, all paying API customers can access GPT-4, OpenAI's most capable model. The company also announced a deprecation plan for older models of the Completions API, recommending that users adopt the Chat Completions API instead. [Read more]
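For developers planning the migration, a minimal sketch of what a Chat Completions request looks like: messages replace the old single prompt string. The endpoint path and field names match OpenAI's documentation at the time; the helper function name here is our own, and no request is actually sent.

```python
import json
import os

# Chat Completions endpoint (per OpenAI docs as of mid-2023):
# POST https://api.openai.com/v1/chat/completions
def build_chat_completion_request(messages, model="gpt-4"):
    """Assemble headers and a JSON body for a Chat Completions call.

    Unlike the legacy Completions API (a single "prompt" string),
    Chat Completions takes a list of role-tagged messages.
    """
    headers = {
        "Content-Type": "application/json",
        # API key is read from the environment; empty if unset.
        "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', '')}",
    }
    body = json.dumps({"model": model, "messages": messages})
    return headers, body

headers, body = build_chat_completion_request(
    [{"role": "user", "content": "Summarize this week's AI news."}]
)
print(json.loads(body)["model"])  # gpt-4
```

From here the body would be POSTed with any HTTP client; the response's `choices[0].message.content` holds the model's reply.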

🐦 Elon Musk imposes Twitter post visibility limits due to AI data scraping. In response to aggressive data scraping by AI companies, Musk has temporarily capped how many posts Twitter accounts can view per day. The changes reflect Musk's ongoing efforts to control how Twitter's data is used by AI technologies. [Read more]

what to know for later

💻 Inflection AI builds supercomputer with 22,000 NVIDIA H100 GPUs. The AI startup, known for the Inflection-1 model that powers its Pi chatbot, is constructing one of the world's largest AI supercomputers. The machine, which will draw 31 megawatts of power, is expected to significantly improve Inflection-1's performance, particularly on coding tasks. Inflection AI has raised around $1.5 billion in investments and is currently valued at $4 billion. [Read more]

🧠 OpenAI aims to align superintelligence with human intent. OpenAI is focusing on the challenge of ensuring that AI systems much smarter than humans still follow human intent. The company is building a team to develop a human-level automated alignment researcher, with the goal of solving the core technical challenges of superintelligence alignment within four years. This effort is in addition to OpenAI's existing work on improving the safety of current models and mitigating other risks from AI. [Read more]

📬 Want weekly updates delivered to your inbox? Subscribe to the Handy AI newsletter here.