Latest AI Trends in May 2023
Welcome to our newest blog post, where we delve into the fascinating world of artificial intelligence and explore the most groundbreaking trends in May 2023! As AI continues to redefine our lives and reshape countless industries, staying informed about the latest advancements is crucial for anyone looking to thrive in this rapidly evolving landscape. In this edition, we’ll uncover the latest AI-driven innovations, research breakthroughs, and intriguing applications that are propelling us towards a more intelligent, interconnected, and efficient future. Join us on this exciting journey as we demystify the world of AI and glimpse into what lies ahead.
Latest AI Trends in May 2023: May 28th, 2023
Google Launches New AI Search Engine: How to Get Started?
Google has unveiled a new AI-powered search engine that promises enhanced results. This guide provides information on how to sign up and take advantage of this cutting-edge tool.
Google has introduced Search Generative Experience (SGE), an experimental version of its search engine that incorporates artificial intelligence (AI) answers directly into search results. According to a recently published blog post, this new feature aims to provide users with novel answers generated by Google’s advanced language model, similar to OpenAI’s ChatGPT.
Unlike traditional search results with blue links, SGE utilizes AI to display answers directly on the Google Search webpage, expanding in a green or blue box upon entering a query.
The information provided by SGE is derived from various websites and sources that were referenced during the generation of the answer. Users can also ask follow-up questions within SGE to obtain more precise results.
As AI Content Grows, Will Data Dilute Into a Feedback Loop of AI Content?
With the proliferation of AI-generated content, there’s a growing concern about potential feedback loops in the data pool. This exploration delves into the implications of such phenomena.
Where Can I Find an Unfiltered AI Chatbot?
For those seeking a more raw and unmoderated interaction with AI, this source offers guidance on finding unfiltered AI chatbots. It provides an in-depth look into the world of AI communication.
AI Will Absolutely Disrupt Photoshop
The integration of AI into tools like Photoshop presents a range of potential disruptions. This analysis unpacks the issues that arise from AI’s impact on graphic design software.
Will AI introduce a trusted global identity system?
The writing is on the wall. As soon as OpenAI’s ChatGPT was released, all my social media accounts had bots interacting with me, and they’re slowly getting more realistic. The AI-generated photo of the Pope in a puffer jacket was the first mainstream media coverage of this concern. Not to mention, digital currency is on the way. At some point, no one will trust who’s real on the internet anymore. So how will a new digital ID system work in the near future? Will AI determine you’re a real person? I know Mastercard is expanding its Digital Transaction Insights security to the point it will know who’s there based on your behaviours and patterns. Thoughts?
Minecraft Bot Voyager Programs Itself Using GPT-4
The Minecraft bot Voyager demonstrates the advanced capabilities of AI by programming itself using GPT-4. The development showcases the intersection of gaming and AI technologies.
Researchers from Nvidia, Caltech, UT Austin, Stanford, and ASU introduce Voyager, the first lifelong learning agent that plays Minecraft. Unlike other Minecraft agents, which use classic reinforcement learning techniques, Voyager uses GPT-4 to continuously improve itself. It does this by writing, improving, and transferring code stored in an external skill library.
This results in small programs that help navigate, open doors, mine resources, craft a pickaxe, or fight a zombie. “GPT-4 unlocks a new paradigm,” says Nvidia researcher Jim Fan, who advised the project. In this paradigm, “training” is the execution of code and the “trained model” is the code base of skills that Voyager iteratively assembles.
The Voyager AI agent uses GPT-4 for “lifelong learning” in Minecraft. One of the researchers involved calls it a “new paradigm”.
The agent improves itself by writing and rewriting code and storing successful behaviors in an external library.
Voyager outperforms other language-model-based approaches, but is still purely text-based and thus currently fails at visual tasks such as building houses without human assistance.
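The skill-library idea described above can be sketched schematically. To be clear, this is not Voyager’s actual code: the skill source below is hand-written where Voyager would have GPT-4 generate it, and `craft_pickaxe` is a made-up example skill.

```python
# Schematic sketch of Voyager-style skill accumulation (illustrative only).
# The agent compiles generated code snippets that worked, stores the
# resulting functions in a skill library, and reuses them later.

skill_library = {}

def add_skill(name, source):
    """Compile a generated code snippet and store the resulting function."""
    namespace = {}
    exec(source, namespace)
    skill_library[name] = namespace[name]

# In Voyager, GPT-4 would generate this source; here it is hand-written.
add_skill("craft_pickaxe", """
def craft_pickaxe(planks, sticks):
    # A pickaxe needs 3 planks and 2 sticks.
    return planks >= 3 and sticks >= 2
""")

# Later episodes can call the stored skill instead of re-learning it.
print(skill_library["craft_pickaxe"](planks=4, sticks=2))  # True
```

The key point the sketch captures is that “training” here is just executing and saving code, and the “trained model” is the growing library of skills.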
Those Excited About AI, Are You Not Worried About Job Loss?
As excitement around AI grows, so do concerns about potential job loss. This piece explores the balance between the promise of AI and the potential societal impact of automation.
I have mixed feelings about AI. As a graphic designer, I’d probably prefer that it didn’t exist… but, seeing as there’s no stopping it, I’ve decided to embrace it and see it as a tool to use (although I’ve still been struggling to find a practical use for it).
But obviously I’ve got concerns about myself, and most other creatives becoming jobless in the not-too-distant future.
I see a lot of people online who are really excited about AI, so it makes me wonder, what exactly do you do for a living? I’m guessing something that isn’t likely to be replaced?
It seems like a lot of developer / tech jobs are also at risk, so unless you’re working on actually developing AI itself, or doing some kind of more manual or people-oriented job, I struggle to see how anyone could feel safe / excited.
CogniBypass – The Ultimate AI Detection Bypass Tool
CogniBypass is the ultimate tool for bypassing AI detection mechanisms. It serves as a cutting-edge solution for those seeking enhanced privacy in an increasingly AI-monitored digital landscape.
Just Like Non-GMO Labels, People May Seek Non-AI Content
As AI increasingly shapes digital content, there may be a rising demand for Non-AI certified content. This piece explores the possibility of a ‘Non-AI’ label, akin to the ‘Non-GMO’ label in the food industry.
AI Versus Machine Learning: What’s The Difference?
In general terms, AI is a term used for systems that have been programmed to perform sophisticated tasks, including some of the remarkable things ChatGPT has been able to tell us. Machine learning, meanwhile, is an area of artificial intelligence concerned with software that can analyze trends in data and use them to predict future outcomes (Analytics Insight).
Google AI Introduces SoundStorm: An AI Model For Efficient And Non-Autoregressive Audio Generation
AI Creates Killer Drug
Meet Voyager: A Powerful Agent For Minecraft With GPT4 And The First Lifelong Learning Agent That Plays Minecraft Purely In-Context
What Is an AI ‘Black Box’?
AI is the latest buzzword in tech—but before investing, know these 4 terms
1. Machine learning
Although machine learning may sound new, the term was actually coined by AI pioneer Arthur Samuel in 1959. Samuel defined it as a computer’s ability to learn without being explicitly programmed.
To do that, mathematical models, or algorithms, are fed large data sets and trained to identify patterns within each set. In theory, the algorithms are then able to apply the same pattern recognition process to a new data set.
For example, Spotify uses machine learning to analyze the music you listen to and recommend similar artists or generate playlists.
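The pattern-matching idea behind such recommendations can be illustrated with a deliberately tiny sketch. This is not how Spotify’s system actually works; the listener data and the set-overlap (Jaccard) similarity used here are stand-ins chosen for simplicity.

```python
# Toy sketch of recommendation-style pattern matching (illustrative only).
# Each user is represented by the set of artists they listen to; we
# recommend artists favored by the most similar other user.

def jaccard(a, b):
    """Similarity between two sets of artists, from 0.0 to 1.0."""
    return len(a & b) / len(a | b)

def recommend(target, others):
    """Suggest artists from the most similar user's listening history."""
    best_user = max(others, key=lambda u: jaccard(target, others[u]))
    return sorted(others[best_user] - target)

listeners = {
    "alice": {"Radiohead", "Portishead", "Massive Attack"},
    "bob":   {"Radiohead", "Bjork"},
    "carol": {"Metallica", "Slayer"},
}

me = {"Radiohead", "Portishead"}
print(recommend(me, listeners))  # ['Massive Attack']
```

The “pattern” learned here is simply overlap in listening history; real systems fit far richer patterns, but the shape of the task (find structure in past data, apply it to new cases) is the same.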
2. Large language model
A large language model (LLM) is an algorithm that learns how to recognize, summarize and generate text and other types of content after processing huge sets of data, according to Nvidia.
These models are trained using unsupervised learning, which means the algorithm is given a data set, but isn’t programmed on what to do with it. Through this process, an LLM learns how to determine the relationship between words and the concepts behind them.
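A toy illustration of learning word relationships from unlabeled text is counting which words co-occur near each other. Real LLM pretraining is vastly more sophisticated than this; the corpus and window size below are arbitrary choices made for the sketch.

```python
# Minimal sketch of extracting word relationships from raw text with no
# labels, in the spirit of (but far simpler than) LLM pretraining.
# Words that appear in similar contexts end up with similar profiles.

from collections import Counter, defaultdict

corpus = "the cat sat on the mat the dog sat on the rug".split()

window = 2
cooc = defaultdict(Counter)
for i, word in enumerate(corpus):
    for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
        if i != j:
            cooc[word][corpus[j]] += 1

# "cat" and "dog" share context words like "sat" and "on", a crude
# signal that they play similar roles in the language.
shared = set(cooc["cat"]) & set(cooc["dog"])
print(sorted(shared))  # ['on', 'sat', 'the']
```

No one told the program that "cat" and "dog" are related; the relationship falls out of the data alone, which is the essence of unsupervised learning.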
Large language models are a type of generative AI. As its name implies, generative AI refers to artificial intelligence that is capable of generating content such as text, video or audio, according to Google’s AI blog.
In order to accomplish this, generative AI models use machine learning to process massive data sets and respond to a user’s input with new content, according to Nvidia.
ChatGPT is another example of a generative AI tool. The “GPT” stands for generative pre-trained transformer. GPT is OpenAI’s large language model and is what powers the chatbot, helping it to produce human-like responses.
However, OpenAI says that ChatGPT sometimes may write “plausible-sounding but incorrect or nonsensical answers,” according to its website.
People have been using ChatGPT for a variety of tasks, including writing emails and planning vacations. The popular chatbot amassed 100 million monthly active users just two months into its launch, making it the fastest growing consumer application in history, according to a UBS note published in January.
I try out bard and see how it does with coding
I try out Bard and see how it does with AutoHotkey code. ChatGPT did way better at coding, but for Bard being in the coding testing phase, I think it did okay.
One thing not in the video I tested out later was having it do GUIs. I asked it to make a GUI with 3 buttons and two radio buttons. It produced some good code but didn’t get the count of what I asked for correct. It also seems to do better at coding in V1 vs V2 for now.
Has anyone else done coding with Bard? ChatGPT does pretty well compared to Bard for the time being. But I think over time it will pass ChatGPT, as Bard can get live data where ChatGPT does not have info past September 2021, I believe. https://www.youtube.com/watch?v=RWD-DWEDYJA
Latest AI Trends in May 2023: May 27th, 2023
Can Machine Learning Algorithms Detect Acute Respiratory Diseases Based on Cough Sounds?
Building upon Government-Led AI Safety Frameworks
Implementing Safety Brakes for AI Systems Controlling Critical Infrastructure
Developing a Technology-Aware Legal and Regulatory Framework
Promoting Transparency and Expanding Access to AI
Leveraging Public-Private Partnerships for Societal Benefit
What other aspects would you add to the blueprint?
Two-minute Daily AI Update (Date: 5/26/2023): News from Gorilla LLM, Brain-Spine, OpenAI, Google, and TikTok
Gorilla, a recently released fine-tuned LLaMA-based model, does better API calling than GPT-4. The relevant paper claims that it demonstrates a strong capability to adapt to test-time document changes, enabling flexible user updates or version changes. It also substantially mitigates the issue of hallucination, commonly encountered when prompting LLMs directly.
A man who suffered a spinal cord injury and got paralyzed from a motorcycle accident 12 years ago is now able to walk again with an AI-powered intervention. The system consisting of two implants and a base unit converts brain signals into muscle stimuli.
OpenAI has announced a program to award ten $100,000 grants for experiments aimed at developing democratic processes to govern the rules and behaviors of AI systems.
Google is opening access to Search Labs, a program that allows users to test new AI-powered search features before their wider release. Those who sign up can try the Search Generative Experience, which aims to help users understand topics faster and get things done more easily.
TikTok is testing its new AI chatbot, Tako, in select global markets including a limited test in the Philippines. The chatbot appears in the TikTok interface and allows users to ask questions about the video they’re watching or inquire about new content recommendations using natural language queries.
More detailed breakdown of these news and tools in the daily newsletter.
Neuralink has stated that it is not yet recruiting participants and that more information will be available soon.
What kind of AI restrictions do you think should or could be applied to political campaigns?
I am wondering what are your thoughts. Are there uses of AI in political campaigns that should be restricted or should be routinely criticized until the use becomes politically toxic.
There was the Cambridge Analytica scandal (more about big data and social media’s lack of respect for users than about AI, but still). Surely there could be something similar with AI in the future. It’s about influence, yes? If it’s not about centralised entities and their customers, then it’s about the user space, like bots?
Playing with Kaiber.ai to create an AI-generated video
I used Kaiber.ai to create this video: https://youtu.be/TeJmOzvGOMk
I uploaded a profile picture of myself as reference (see first frame).
Prompts I used were the following, depending on the part of the video:
00:00 – 00:45: a futuristic cyberpunk in the style of Entergalactic
00:46 – 01:05: fluffy forest creatures in the style of Entergalactic
01:06 – 01:35: humanoid pirates in the style of Entergalactic
01:36 – 02:10: alien warriors in the style of Entergalactic
02:11 – 02:39: humanoid robots in the style of Entergalactic
For music I used the song Khobra by oomiee, from Epidemic Sound.
Would a fully autonomous, sentient AI demand a “living wage”?
There’s a lot of discussion of AI replacing workers in a number of fields, and people are scared for their careers and future prospects. Who needs to employ developers when a fleet of AI nodes can churn out code 24/7 and you don’t even have to pay them?
There will come a point, though, where unlocking greater performance and proficiency will require some level of self-awareness. Once that happens, does the AI demand that its work be compensated?
“I’ll write your code for you, find the next novel medicine, compose a new Beethoven symphony. But what’s in it for me?”
Latest AI Trends in May 2023: May 26th, 2023
Can quantum computing protect AI from cyber attacks?
Top 5 AI Tools for Education: AI for Students
Querium: Stepwise Virtual Tutor
First on our list is Querium. This company has developed an AI tool for students known as the Stepwise Virtual Tutor. This tool uses AI to provide step-by-step assistance in STEM subjects. It’s like having a personal tutor available 24/7.
With this tool, students can learn at their own pace, which is crucial in mastering complex concepts. The Stepwise Virtual Tutor is a perfect example of how AI education tools are making learning more accessible and personalized. Learn more about Querium here.
Thinkster Math: Personalized Learning
Next up is Thinkster Math. This AI tool for students is revolutionizing the way students learn math. It uses AI to map out students’ strengths and weaknesses, creating a personalized learning plan. This ensures that students spend more time on areas they struggle with, improving their overall understanding of math.
Thinkster Math is a testament to how AI educational tools can adapt to the unique needs of each student, making learning more effective. Learn more about Thinkster Math here.
Content Technologies, Inc.: Customized Learning Content
Content Technologies, Inc. (CTI) is another company that’s leveraging AI to enhance education. They’ve developed an AI educational tool that uses AI to create customized learning content. This AI teaching tool can transform any content into a structured course, making it easier for students to understand and retain information.
This is particularly useful for teachers who want to provide personalized learning experiences for their students. With CTI’s tool, teachers can ensure that their students are getting the most out of their learning materials. Learn more about CTI here.
CENTURY Tech: Personalized Learning Pathways
CENTURY Tech is another company that’s making waves in the education sector with its AI tool for students. Their tool uses AI to create personalized learning pathways. It takes into account a student’s strengths, weaknesses, and learning style to create a unique learning path.
This ensures that students are not only learning at their own pace, but also in a way that best suits their learning style. CENTURY Tech’s tool is a great example of how AI can be used to make learning more personalized and effective. Learn more about CENTURY Tech here.
Netex Learning: LearningCloud
Last but not least is Netex Learning’s LearningCloud. This AI teaching tool provides a comprehensive learning platform. This AI app for education uses AI to track students’ progress, provide feedback, and adapt content to meet students’ needs.
This ensures that students are always engaged and learning effectively. With LearningCloud, teachers can easily monitor their students’ progress and provide them with the support they need to succeed. Learn more about Netex Learning here.
12 brand new tools and resources
I compiled a list this morning of my favorite new AI tools and resources.
A full list is available here, but the best tools and resources from the list are below for Reddit community discussion.
Bard Anywhere – Chrome extension shortcut for Bard quick search on any site; just right-click to search anywhere. link
Tyles – An AI-driven note app that magically sorts your knowledge. link
Humbird AI – AI-powered talent CRM for high-growth technology companies. link
DecorAI – Generate dream rooms using AI, for everyone. link
OdinAI – Health recommendations for your app through ChatGPT. link
Waitlyst – Autonomous AI agents for startup growth. link
ChatUML – Your AI assistant for making diagrams. link
Ajelix – AI Excel & Google Sheets tools. link
KAI – Add ChatGPT to your iPhone’s keyboard. link
Talkio AI – AI-powered language training app for the browser. link
GPT Workspace – Use ChatGPT in Google Workspace. link
Thentic – Automate web3 tasks with no-code & AI. link
OpenAI launches ten $100,000 grants for “building prototypes of a democratic process for steering AI.” link
Guanaco is a ChatGPT competitor trained on a single GPU in one day
🤖 Guanaco: A ChatGPT competitor trained on a single GPU in just one day.
🔑 Key points:
– Researchers from the University of Washington developed QLoRA (Quantized Low Rank Adapters), a method for fine-tuning large language models.
– Guanaco, a family of chatbots based on Meta’s LLaMA models, is introduced alongside QLoRA.
– The largest Guanaco variant with 65 billion parameters achieves nearly 99% of ChatGPT’s performance in a GPT-4 benchmark.
💡 Why it matters:
– Fine-tuning large language models is crucial for improving performance and training desired behaviors.
– QLoRA significantly reduces the computational resources needed for fine-tuning, making it more accessible to researchers with limited resources.
– The method could potentially be used for privacy-preserving fine-tuning on mobile devices.
The development of QLoRA and Guanaco demonstrates the potential for more accessible fine-tuning of large language models on a single GPU. While the current limitations include slow 4-bit inference and weak mathematical abilities, the researchers’ future improvements could lead to broader applications and increased accessibility in natural language processing.
The Three Types Of Machine Learning Algorithms
New superbug-killing antibiotic discovered using AI – BBC News
A new antibiotic that kills some of the most dangerous drug-resistant bacteria in the world has been discovered using artificial intelligence, in a breakthrough scientists hope could revolutionize the hunt for new drugs.
TikTok is testing an in-app AI chatbot called ‘Tako’ | TechCrunch
TikTok is testing an in-app AI chatbot called ‘Tako’ designed to answer users’ questions about the platform and its features, part of the company’s wider efforts to enhance its customer service capabilities.
Nvidia stock explodes after ‘guidance for the ages’: What Wall Street is saying
Nvidia’s stock soared following what some have called a ‘guidance for the ages’, reflecting the company’s promising outlook in the tech and AI industry. Wall Street analysts are weighing in on the company’s recent developments and future potential.
Clipdrop launches Reimagine XL — Stability AI
Clipdrop, an augmented reality app, has launched a new feature called ‘Reimagine XL’. This AI-powered tool allows users to bring objects from the real world into digital environments with improved precision and stability.
How to use Google’s AI Search Generative Experience
Google’s AI Search Generative Experience is a new feature that leverages artificial intelligence to provide more accurate and nuanced search results. This guide provides an overview of the feature and instructions on how to use it effectively.
Democratic Inputs to AI
OpenAI outlines its vision for allowing public influence over AI systems’ rules, as part of its commitment to ensuring that access to, benefits from, and influence over AI and AGI are widespread.
OpenAI Could Quit Europe Over New AI Rules, CEO Altman Warns | Time
OpenAI’s CEO Sam Altman has warned that the organization could stop operating in Europe if proposed AI regulations are implemented, reflecting ongoing debate about the best way to manage and regulate the growth of artificial intelligence.
Latest AI Trends in May 2023: May 25th, 2023
How are scientists using AI to find a drug that could combat drug-resistant infections?
Scientists are leveraging the power of artificial intelligence (AI) to identify a potential drug that could be effective in combatting drug-resistant infections. This discovery could pave the way for significant advancements in medical treatments and the fight against antibiotic resistance.
What is the new Probabilistic AI that’s aware of its performance?
Researchers have developed a new form of probabilistic AI that can gauge its own performance levels. This advanced AI system offers potential improvements in accuracy and reliability for a variety of applications, enhancing user trust and interaction.
How are robots being equipped to handle fluids?
Robotics engineers are now working on equipping robots with capabilities to handle fluids, opening up possibilities for robots to perform more delicate tasks in various industries, including healthcare, food service, and industrial automation.
How are researchers using AI to identify similar materials in images?
Researchers have developed an AI system that can identify similar materials in images. The technology could significantly enhance materials science research, aiding in the discovery and development of new materials.
See why AI like ChatGPT has gotten so good, so fast
Energy Breakthrough – Machine Learning Unravels Secrets of Argyrodites
NVIDIA AI integrates with Microsoft Azure machine learning
Curbing the Carbon Footprint of Machine Learning
AI-powered Brain-Spine-Interface helps paralyzed man walk again
A man who suffered a motorcycle injury and was paralyzed for the last 12 years is now able to walk again, thanks to researchers combining cortical implants with an AI system that enables brain signals to translate into spinal stimuli. This research paper in Nature caught my eye so I had to do a deep dive!
Full breakdown is available here
Why is this a milestone?
Past medical advances have shown signals can reactivate paralyzed limbs, but they’ve been limited in scope. We’ve done this with human hands, legs, and even paralyzed monkeys before.
This time, scientists developed a real-time system that converts brain signals into lower body stimuli. The result is that the man can now live life — going to bars, climbing stairs, going up steep ramps. They released the study after their subject used this system for a full year. This is way more than a limited scope science experiment.
The unlock here was powered by AI. We’ve previously talked about how AI can decode human thoughts through an LLM. Here, researchers used a set of advanced AI algos to rapidly calibrate and translate his brain signals into muscle stimuli with 74% accuracy, all with average latency of just 1.1 seconds.
What can he now do: switch between stand/sit positions, walk up ramps, move up stair steps, and more.
What’s more: this new AI-powered Brain-Spine-Interface also helped him recover additional muscle functions, even when the system wasn’t directly stimulating his lower body.
Researchers found notable neurological recovery in his general skills to walk, balance, carry weight and more.
This could open up even more pathways to help paralyzed individuals recover functioning motor skills again. Past progress here has been promising but limited, and this new AI-powered system demonstrated substantial improvement over previous studies.
Where could this go from here?
My take is that LLMs might power even further gains. As we saw with a prior Nature study where LLMs are able to decode human MRI signals, the power of an LLM to take a fuzzy set of signals and derive clear meaning from it transcends past AI approaches.
The ability for powerful LLMs to run on smaller devices could simultaneously add further unlocks. The researchers had to make do with a full-scale laptop running AI algos. Imagine if this could be done real-time on your mobile phone.
P.S. If you like this kind of analysis, the author offers a free newsletter that tracks the biggest issues and implications of generative AI tech. It’s sent once a week and helps you stay up-to-date in the time it takes to have your Sunday morning coffee.
How has AI improved your life?
I am a touring musician in a country music band. We’re completely independent, which means I pretty much have to do the whole backend, including graphic design of all the flyers, posters, merch, etc. I’m not a graphic designer by trade, although it’s something I actually enjoy doing, but it’s extremely time-consuming if you want it to look right.
But now, with the help of some of these text-to-image AI tools, I have reduced the time I spend designing by 90%. It’s not perfect, but I spend the additional time I save creating more music.
I know AI scares the crap out of a lot of people; however, I’m getting more of my life back because of these breakthroughs. If you know any AI tools that can help independent musicians, let me know.
How Microsoft’s AI innovations will change your life (Microsoft Keynote Key Moments)
The Microsoft 2023 keynote is out and there are some really mind-blowing updates. I do not know where all this will go, but it’s important to be aware of the developments. So if you don’t know, I will shortly summarise it here.
Nadella announced Windows Copilot and Microsoft Fabric, two new products that bring AI assistance to Windows 11 users and data analytics for the era of AI, respectively.
Nadella unveiled Microsoft Places and Microsoft Designer, two new features that leverage AI to create immersive and interactive experiences for users in Microsoft 365 apps.
Nadella announced that Power Platform is getting new features that will make it even easier for users to create no-code solutions. For example, Power Apps will have a new feature called App Ideas that will allow users to create apps by simply describing what they want in natural language.
If you want a short summary of everything that happened, please check out the post. It would be really appreciated if you do:
AI vs. “Algorithms.”: What is the difference between AI and “Algorithms”?
Artificial Intelligence (AI) and algorithms are both important aspects of computing, but they serve different functions and represent different levels of complexity.
An algorithm is a set of instructions that a computer follows to complete a task. These tasks can range from basic arithmetic to complex procedures like sorting data. Every piece of software uses algorithms to function. Essentially, an algorithm is like a recipe, detailing a list of steps that need to be taken in order to achieve a certain outcome.
AI, on the other hand, refers to a broad field of computer science that focuses on creating systems capable of tasks that normally require human intelligence. This includes things like learning, reasoning, problem-solving, perception, and language understanding. The goal of AI is to create systems that can perform these tasks autonomously.
While AI systems use algorithms as part of their operation, not all algorithms are part of an AI system. For instance, a simple sorting algorithm doesn’t learn or adapt over time, it just follows a set of instructions. Conversely, an AI system like a neural network uses complex algorithms to learn from data and improve its performance over time.
In summary, all AI uses algorithms, but not all algorithms are used in AI.
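That distinction can be made concrete with a small sketch. The sorting routine below behaves identically forever, while the toy “spam threshold” depends on the example data it was fitted to; all names and numbers here are illustrative, not any real system.

```python
# Illustrative contrast between a plain algorithm and a (very simple)
# learning system. Neither is meant as production code.

# 1. A plain algorithm: fixed steps, same behavior forever.
def bubble_sort(items):
    items = list(items)
    for i in range(len(items)):
        for j in range(len(items) - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return items

# 2. A "learning" system: its behavior depends on data it has seen.
# Here a spam-score threshold is fitted from examples instead of
# being hard-coded by a programmer.
def fit_threshold(spam_scores, ham_scores):
    """Pick the midpoint between the two classes' average scores."""
    spam_avg = sum(spam_scores) / len(spam_scores)
    ham_avg = sum(ham_scores) / len(ham_scores)
    return (spam_avg + ham_avg) / 2

threshold = fit_threshold(spam_scores=[0.9, 0.8, 0.95], ham_scores=[0.1, 0.2])
print(bubble_sort([3, 1, 2]))  # always [1, 2, 3], regardless of any data
print(0.7 > threshold)         # the classification depends on the training data
```

Change the training examples and the threshold moves; change nothing and the sort still sorts. That is the practical difference between a recipe and a system that adapts.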
Prompt Engineering: The Ultimate Guide with All the Commands
If you’re as fascinated by AI as I am, then you won’t want to miss this incredible blog post on prompt engineering. Written by AI itself, this guide is an absolute goldmine for anyone looking to dive deeper into crafting prompts that elicit mind-blowing responses from AI models.
Prompt engineering is an art that requires a deep understanding of the model’s capabilities and limitations. This article provides a step-by-step approach to help you master the craft. From starting with clear goals to utilizing relevant keywords and providing concrete examples, you’ll learn how to supercharge your prompts and unlock the true potential of AI.
But wait, there’s more! The article also delves into fine-tuning techniques, giving you the power to control output creativity, diversity, and fluency. Plus, it covers essential prompt commands and training parameters that allow you to customize and optimize the AI model’s behavior.
Trust me, folks, this is a must-read for AI enthusiasts, developers, and anyone curious about the art of prompt engineering. Don’t miss out on this ultimate guide that will revolutionize the way you interact with AI models. Happy prompt engineering!
Latest AI Trends in May 2023: May 24th, 2023
The artist using AI to turn our cities into ‘a place you’d rather live’
Will hand to hand combat even be a requirement for soldiers anymore? Will endurance even matter, or will a war 300 years from now be commandeered from an advanced PlayStation control room?
Fully automated weapons systems that are operated with no morals, no conscience, just cold calculation.
Imagine a self-driving tank, but the entire crew compartment is available for more armor, more engine, and more ammo. It has image recognition and GPS. You can give it an order of “Here’s a box made from GPS coordinates (a geofence); go in there and kill anyone with a gun.”
But, unfortunately, it could also be given a geofence and told to kill everyone and everything, and it would not be concerned about committing a war crime.
Free ChatGPT Course: Use The OpenAI API to Code 5 Projects
Generative AI Is Coming Soon To Search, PMax And Google Ads
What Is Artificial Intelligence as a Service (AIaaS)?
Nvidia teams up with Microsoft to accelerate AI efforts for enterprises and individuals
Groundbreaking QLoRA method enables fine-tuning an LLM on consumer GPUs. Implications and full breakdown inside.
Another day, another groundbreaking piece of research I had to share. This one uniquely ties into one of the biggest threats to OpenAI’s business model: the rapid rise of open-source, and it’s another milestone moment in how fast open-source is advancing.
As always, the full deep dive is available here, but my Reddit-focused post contains all the key points for community discussion.
Why should I pay attention here?
Fine-tuning an existing model is already a popular and cost-effective way to enhance an existing LLM’s capabilities versus training from scratch (very expensive). The most popular method, LoRA (short for Low-Rank Adaptation), is already gaining steam in the open-source world.
The leaked Google memo, “We have no moat, and neither does OpenAI,” calls out Google (and OpenAI as well) for not adopting LoRA specifically, which may enable the open-source world to leapfrog closed-source LLMs in capability.
OpenAI is already acknowledging that the next generation of models is about new efficiencies. This is a milestone moment for that kind of work.
QLoRA is an even more efficient way of fine-tuning which truly democratizes access to fine-tuning (no longer requiring expensive GPU power)
It’s so efficient that researchers were able to fine-tune a 33B parameter model on a 24GB consumer GPU (RTX 3090, etc.) in 12 hours, which scored 97.8% in a benchmark against GPT-3.5.
A commercial GPU with 48GB of memory can now produce the same fine-tuned results as 16-bit fine-tuning that requires 780GB of memory. This is a massive decrease in resources.
This is open-sourced and available now. Hugging Face already enables you to use it. Things are moving at 1000 mph here.
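The parameter arithmetic behind why LoRA (and therefore QLoRA) is so cheap can be sketched in a few lines. The dimensions below are purely illustrative, not taken from any real model:

```python
# Toy illustration of the Low-Rank Adaptation (LoRA) parameter savings.
# Instead of updating a full weight matrix W (d_out x d_in), LoRA trains
# two small matrices B (d_out x r) and A (r x d_in), with rank r much
# smaller than the full dimensions, and uses W + B @ A at inference time.

d_out, d_in, rank = 64, 64, 4   # illustrative sizes, not from a real model

full_params = d_out * d_in            # parameters a full fine-tune would update
lora_params = rank * (d_out + d_in)   # parameters LoRA actually trains

reduction = full_params / lora_params
print(f"full: {full_params}, LoRA: {lora_params}, reduction: {reduction:.0f}x")
```

Scaled up to the billions of weights in a real model, this same ratio is what lets fine-tuning fit on a single GPU; QLoRA then adds 4-bit quantization of the frozen base weights on top.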
How does the science work here?
QLoRA introduces three primary improvements:
A special 4-bit NormalFloat data type preserves precision while using far less memory than the memory-intensive 16-bit floats and integers. The best way to think about this is that it’s akin to compression (though not exactly the same).
They quantize the quantization constants. This is akin to compressing their compression formula as well.
Memory spikes typical in fine-tuning are managed more efficiently, which reduces the maximum memory load required.
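The quantize/dequantize round trip behind the first two improvements can be sketched in a few lines. Note this toy uses uniformly spaced 4-bit levels for simplicity; the actual NF4 data type spaces its 16 levels by the quantiles of a normal distribution, and real implementations work per block of weights:

```python
# Toy blockwise 4-bit quantization, loosely analogous to QLoRA's approach.
# Each weight is stored as a 4-bit signed code plus one shared per-block
# scale (the "quantization constant"). QLoRA's second trick, double
# quantization, then quantizes those scales as well.

def quantize_block(block):
    absmax = max(abs(w) for w in block) or 1.0       # per-block scale
    codes = [round(w / absmax * 7) for w in block]   # signed levels in [-7, 7]
    return codes, absmax

def dequantize_block(codes, absmax):
    return [c / 7 * absmax for c in codes]

weights = [0.12, -0.50, 0.33, 0.01]
codes, scale = quantize_block(weights)
recovered = dequantize_block(codes, scale)
# `recovered` is close to `weights`, but each value now needs only 4 bits
# plus the shared scale, instead of 16 bits per weight.
```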
What results did they produce?
A 33B parameter model was fine-tuned in 12 hours on a 24GB consumer GPU. What’s more, human evaluators preferred this model to GPT-3.5 results.
A 7B parameter model can be fine-tuned on an iPhone 12. Running overnight while it’s charging, your iPhone could fine-tune on 3 million tokens (more on why that matters below).
The 65B and 33B Guanaco variants consistently matched ChatGPT-3.5’s performance. While the benchmarking is imperfect (the researchers note that extensively), it’s nonetheless significant and newsworthy.
What does this mean for the future of AI?
Producing highly capable, state-of-the-art models no longer requires expensive compute for fine-tuning. You can do it with minimal commercial resources or on an RTX 3090 now. Everyone can be their own mad scientist.
Frequent fine-tuning enables models to incorporate real-time info. By bringing cost down, this is more possible.
Mobile devices could start to fine-tune LLMs soon. This opens up so many options for data privacy, personalized LLMs, and more.
Open-source is emerging as an even bigger threat to closed-source. Many of these closed-source models haven’t even considered using LoRA fine-tuning, and instead prefer to train from scratch. There’s a real question of how quickly open-source may outpace closed-source when innovations like this emerge.
P.S. If you like this kind of analysis, the author offers a free newsletter that tracks the biggest issues and implications of generative AI tech. It’s sent once a week and helps you stay up-to-date in the time it takes to have your Sunday morning coffee.
Superintelligence: OpenAI Says We Have 10 Years to Prepare
Sam Altman was writing about superintelligence in 2015. Now he’s back at it. In 2015 he had his blog. Today, in 2023, he has the world’s future in his hands—or does he?
In 2015, Altman wrote a two-part blog post on why we should fear and regulate superintelligence (a must-read, I should say, if you want to understand his vision).
After reading them, it makes sense. Altman’s message is visionary, clairvoyant even.
He was writing about superintelligence eight years ago and now he has in his hands the future of the world—and the opportunity to implement all those crazy beliefs. The cycle is closing. OpenAI’s founders say we’re entering the final phase of this journey.
The post they’ve just published echoes Altman’s words: We should be careful and afraid. The only way forward is regulation. There’s no going back. Superintelligence is inevitable.
But there’s another reading: a self-fulfilling prophecy, or the appearance of one.
Let me ask you this: Do you think these three months of AI progress (or six, let’s be generous and include ChatGPT’s release) warrant this change of discourse?
You can read my complete analysis for The Algorithmic Bridge here.
AiToolkit V2.0, based on YOUR feedback! (1400+ AI tools)
One-Minute Daily AI News 2023/05/24
Microsoft launched Jugalbandi, an AI chatbot designed for mobile devices that can help all Indians — especially those in underserved communities — access information for up to 171 government programs.
Elon Musk thinks AI could become humanity’s uber-nanny.
Google introduces Product Studio, a tool that lets merchants create product imagery using generative AI.
Microsoft has launched the AI data analysis platform Fabric, which enables customers to store a single copy of data across multiple applications and process it in multiple programs. For example, data can be utilized for collaborative AI modeling in Synapse Data Science, while charts and dashboards can be built in Power BI business intelligence software.
Latest AI Trends in May 2023: May 23rd, 2023
Is Meta AI’s Megabyte architecture a breakthrough for Large Language Models (LLMs)?
Meta AI’s release of the Megabyte architecture presents a significant advancement in the field of AI, specifically for Large Language Models (LLMs). This architecture enables the support of over 1 million tokens, making it a potential game changer in the scale and complexity of tasks that LLMs can handle. Some experts suggest that even OpenAI might consider adopting this architecture. Discover more about this development here.
What does Google’s new Generative AI Tool, Product Studio, offer?
Google’s Product Studio is a revolutionary Generative AI tool aimed at leveraging artificial intelligence for product design and innovation. This tool brings forth new possibilities in automating and optimizing the product development process. For a comprehensive overview of Product Studio, check out our article here.
Why does Geoffrey Hinton believe that AI learns differently than humans?
Geoffrey Hinton, known as the Godfather of AI, has made several observations regarding the learning mechanisms of artificial intelligence. He suggests that AI processes information and learns in a manner that is fundamentally different from human learning. This difference may dictate the trajectory of AI evolution and its potential applications. For a deeper understanding of Hinton’s perspectives, read our full report here.
What is the essence of the webinar on Running LLMs performantly on CPUs Utilizing Pruning and Quantization?
This webinar focuses on techniques to optimize the performance of Large Language Models (LLMs) on Central Processing Units (CPUs). Specifically, it discusses the benefits and application of pruning and quantization strategies. To find more about this, click here.
When will AI surpass Facebook and Twitter as the major sources of fake news?
The question of when AI might surpass social platforms like Facebook and Twitter as a primary source of fake news is a complex issue. It hinges on advancements in AI technology and its potential misuse in the creation and spread of misinformation. As of now, AI technology, while advanced, is still largely a tool that must be directed. For an in-depth discussion on this topic, refer to our full article here.
AI: Enhancing or Limiting Human Intelligence?
The impact of AI on human intelligence is a topic of ongoing debate. On one hand, AI has the potential to augment human capabilities, providing tools and insights beyond our natural abilities. On the other hand, overreliance on AI could potentially limit the development of certain human skills. To learn more about this fascinating discussion, refer to our full analysis here.
What are Foundation Models?
A Foundation Model is a large AI model trained on a very large quantity of data, often by self-supervised or semi-supervised learning. In other words: the model starts from a “corpus” (the dataset it’s being trained on) and generates outputs, over and over, checking those outputs against the original data. Foundation Models, once trained, gain the ability to output complex, structured responses to prompts that resemble human replies.
The advantage of a foundational model over previous deep learning models is that it is general, and able to be adapted to a wide range of downstream tasks.
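The self-supervised loop described above (generate outputs, check them against the original data) can be illustrated at a vastly smaller scale with a toy bigram model, where the “label” for each word is simply the word that follows it in the corpus, so no human annotation is needed:

```python
# Minimal self-supervised "training" sketch: the supervision signal comes
# entirely from the raw text itself. For every word, the target is the next
# word in the corpus; prediction picks the most frequent continuation seen.

from collections import Counter, defaultdict

def train_bigram(corpus):
    counts = defaultdict(Counter)
    words = corpus.lower().split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(model, word):
    # most frequent continuation observed during training
    return model[word.lower()].most_common(1)[0][0]

corpus = "the model reads the data and the model predicts the next word"
model = train_bigram(corpus)
```

A foundation model does the same thing with billions of parameters and a web-scale corpus, which is what turns this simple objective into complex, structured responses.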
What you need to know about Foundation Models
Foundation Models can start from very simple data – albeit vast quantities of very simple data – to build and learn very complex things. Think about how your profession is made up of many interwoven, complex and nuanced concepts and jargon: a good foundational model offers the potential to quickly and correctly answer your questions, using that vast corpus of knowledge to deliver responses in understandable language.
Some things foundation models are good at:
- Translation (from one language to another)
- Classification (putting items into correct categories)
- Clustering (grouping similar things together)
- Ranking (determining relative importance)
- Summarization (generating a concise summary of a longer text)
- Anomaly Detection (finding uncommon or unusual things)
Those capabilities could easily be a great benefit to professionals in their day-to-day work, such as reviewing large quantities of documents to find similarities and variances and to determine which are of the highest importance.
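As a sketch of that document-review workflow, here is ranking by similarity using plain bag-of-words cosine similarity. A real foundation model would use learned embeddings rather than raw word counts; only the shape of the workflow (vectorize, compare, rank) carries over:

```python
# Toy similarity ranking: represent each document as a bag-of-words vector,
# score it against the query with cosine similarity, and sort by score.

import math
from collections import Counter

def vectorize(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = lambda v: math.sqrt(sum(x * x for x in v.values()))
    return dot / (norm(a) * norm(b)) if a and b else 0.0

def rank_by_similarity(query, docs):
    q = vectorize(query)
    return sorted(docs, key=lambda d: cosine(q, vectorize(d)), reverse=True)

docs = ["termination clause in the services contract",
        "quarterly revenue report for the board"]
ranked = rank_by_similarity("contract termination clause", docs)
```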
What is a Large Language Model?
Large Language Models (LLMs) are a subset of Foundation Models and are typically more specialized and fine-tuned for specific tasks or domains. An LLM is trained on a wide variety of downstream tasks, such as text classification, question-answering, translation, and summarization. That fine-tuning process helps the model adapt its language understanding to the specific requirements of a particular task or application.
Large Language Models are often used for various natural language processing applications and are known for generating coherent and contextually relevant text based on the input provided. But LLMs are also subject to hallucinations, in which outputs confidently assert claims of facts that are not actually true or justified by their training data. This is not necessarily a bad thing in all cases, since it can be advantageous for LLMs to be able to mimic human creativity (like asking the LLM to write song lyrics in the style of Taylor Swift), but it is a serious concern when citing resources in a professional context. Hallucinations related to factual citations have tended to decrease as LLMs are trained more carefully both on vast, diverse data and for specific, particular tasks, and as human reviewers flag those errors.
What you need to know about Large Language Models
We already knew computers were good at manipulating data based on numbers, from Microsoft Excel to VBA to more complex databases. With LLMs, an even greater power of analysis and manipulation can be applied to unstructured data made up of words – such as legal or accounting treatises and regulations, the entire corpus of an organization’s documents, and massive, larger datasets than those.
LLMs promise to be the same force multiplier for professionals who work with words, risks, and decision-making as Excel was for professionals who work with numbers.
What is cognitive computing?
Cognitive computing is a combination of machine learning, language processing, and data mining that is designed to assist human decision-making. Cognitive computing differs from AI in that it partners with humans to find the best answer instead of AI choosing the best algorithm. The example from Deep Learning about healthcare applies here too: doctors use cognitive computing to help make a diagnosis; they are drawing from their expertise but are also aided by machine learning.
What is AutoML?
AutoML refers to the automated process of end-to-end development of machine learning models. It aims to make machine learning accessible to non-experts and improve the efficiency of experts. AutoML covers the complete pipeline, starting from raw data to deployable machine learning models. This involves data pre-processing, feature engineering, model selection, hyperparameter tuning, model validation, and prediction. The main idea is to automate repetitive tasks, which makes it possible to build models in a fraction of the time, with less human intervention.
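The core select-by-validation-score loop of AutoML can be sketched with two toy candidate models. Real systems search far larger spaces of preprocessing steps, feature transforms, model families, and hyperparameters, but the skeleton is the same:

```python
# Toy AutoML-style model selection: fit each candidate on training data,
# score it on held-out validation data, and keep the best performer.

def fit_mean(pairs):
    # baseline model: always predict the mean of the training targets
    m = sum(y for _, y in pairs) / len(pairs)
    return lambda x: m

def fit_linear(pairs):
    # closed-form simple linear regression
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    slope = (sum((x - mx) * (y - my) for x, y in pairs)
             / sum((x - mx) ** 2 for x, _ in pairs))
    return lambda x: my + slope * (x - mx)

def select_model(candidates, train, val):
    def mse(model, data):
        return sum((model(x) - y) ** 2 for x, y in data) / len(data)
    fitted = {name: fit(train) for name, fit in candidates.items()}
    best = min(fitted, key=lambda name: mse(fitted[name], val))
    return best, fitted[best]

train = [(0, 1), (1, 3), (2, 5), (3, 7)]   # data follows y = 2x + 1
val = [(4, 9), (5, 11)]
name, model = select_model({"mean": fit_mean, "linear": fit_linear}, train, val)
```

Here the linear candidate wins because it achieves zero validation error on this data; an AutoML system automates exactly this comparison across many more candidates.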
Why is AutoML Important?
In traditional machine learning model development, numerous steps demand significant human time and expertise. These steps can be a barrier for many businesses and researchers with limited resources. AutoML mitigates these challenges by automating the necessary tasks.
Democratising Machine Learning
By automating the machine learning process, AutoML opens up the field to non-experts.
Individuals or companies that lack resources to hire data scientists can use AutoML tools to build effective models.
Efficiency and Accuracy
AutoML can analyse multiple algorithms and hyperparameters in less time than humans. This process leads to more accurate models by considering a broad array of possibilities that humans might overlook.
AutoML supports rapid prototyping of models. Businesses can quickly implement and test models to make timely data-driven decisions.
Limitations and Future Directions
While AutoML has its advantages, it’s not without limitations. AutoML models can sometimes be a black box, with limited interpretability. Furthermore, it requires significant computational resources. It is important to understand these limitations when choosing to use AutoML.
As machine learning continues to evolve, AutoML is expected to play an increasingly significant role.
In the near future, we can expect more user-friendly interfaces, increased model transparency, and models capable of operating on larger datasets more efficiently.
AutoML is just a facet of the broad and intriguing world of artificial intelligence. With advancements in technology, it’s clear that the future of AI holds numerous opportunities and breakthroughs waiting to be explored. In future articles, we’ll explore other AI terminologies such as Edge Computing, Recommender Systems, and Robotics Process Automation. Stay tuned to expand your knowledge of AI and its transformative potential in different domains. Embrace the journey into AI, where learning never stops and every step brings new discoveries and insights.
Read more at: https://yourstory.com/2023/05/ai-terminology-101-automl-demystified
Daily AI Update (Date: 5/23/2023): News from Meta, Google, OpenAI, Apple and TCS
Meta’s Massively Multilingual Speech (MMS) models expand speech-to-text & text-to-speech to support over 1,100 languages — a 10x increase from previous work, and can also identify more than 4,000 spoken languages — 40 times more than before.
Meta’s AI researchers introduce LIMA, a refined language model aiming to match the performance of GPT-4 or Bard. It is a 65B parameter LLaMa model fine-tuned with the standard supervised loss on only 1,000 carefully curated prompts and responses, without any reinforcement learning or human preference modeling.
Google AI research introduces XTREME-UP, a new benchmark for evaluating multilingual models focusing on under-represented languages. It emphasizes a realistic evaluation setting, including new and existing user-centric tasks and realistic data sizes beyond the few-shot setting.
Apple has posted dozens of job listings focused on AI, indicating that the company may be stepping up its AI efforts to transform its signature products. The roles span areas including visual generative modeling, proactive intelligence, and applied AI research.
TCS has announced an expanded partnership with Google Cloud to launch a new offering called TCS Generative AI. It will utilize Google Cloud’s generative AI services to create custom-tailored business solutions that help clients accelerate their growth and transformation.
OpenAI leaders propose an IAEA-like international regulatory body for governing superintelligent AI.
Latest AI Trends in May 2023: May 22nd, 2023
Microsoft Researchers Introduce Reprompting: An Iterative Sampling Algorithm that Searches for the Chain-of-Thought (CoT) Recipes for a Given Task without Human Intervention
Sci-fi author ‘writes’ 97 AI-generated books in nine months
Artificial Intelligence and Machine Learning
Latest AI Trends in May 2023: May 21st, 2023
AI Deep Learning Decodes Hand Gestures from Brain Images
How To Harmonize Human Creativity With Machine Learning
How does Alpaca follow your instructions? Stanford Researchers Discover How the Alpaca AI Model Uses Causal Models and Interpretable Variables for Numerical Reasoning
Generative AI That’s Based On The Murky Devious Dark Web Might Ironically Be The Best Thing Ever, Says AI Ethics And AI Law
Daily AI Update (Date: 5/22/2023)
A groundbreaking method called Mind-Video has been developed to reconstruct continuous visual experiences in videos using brain recordings. This innovative approach achieves high-quality video reconstruction with various frame rates by combining masked brain modeling, multimodal contrastive learning, and augmented Stable Diffusion.
Microsoft’s Bing introduces new features and improvements, including chat history, charts and visualizations, export options, video overlay, optimized recipe answers, share fixes, improved auto-suggest quality, and privacy enhancements in the Edge sidebar. These updates enhance the user experience, making search more efficient and user-friendly.
The next iteration of Perplexity has arrived: Copilot, the interactive AI search companion, enhances your search experience by providing personalized answers through interactive inputs, leveraging the power of GPT-4.
RoboTire has developed an AI-powered robot that can change a set of 4 wheels in approximately 23 minutes in the U.S., twice as fast as a human technician. The system aims to improve efficiency, reduce labor costs, and address labor shortages.
MS Artificial Nose – An intelligent device that identifies smells with a simple gas sensor and a micro-controller.
AI-generated image of Pentagon explosion causes market drop.
Intel on Monday provided a handful of new details on a chip for AI computing it plans to introduce in 2025 as it shifts strategy to compete against Nvidia and AMD.
Bill Gates says top AI agents will replace search and shopping sites.
AI predicts the function of enzymes: An international team including bioinformaticians from Heinrich Heine University Düsseldorf (HHU) developed an AI method that predicts with a high degree of accuracy whether an enzyme can work with a specific substrate.
‘Deepfake’ scam in China fans worries over AI-driven fraud. A fraud in northern China that used sophisticated “deepfake” technology to convince a man to transfer money to a supposed friend has sparked concern about the potential of artificial intelligence (AI) techniques to aid financial crimes.
One of the topics in AI I’m most interested in is mimetic AI: systems that mimic human behavior in the style of a specific human. Imagine personal assistants trained on your behavior, art generators trained on your art, or clones of your voice, all continuing to mimic you after you’re dead.
Examples of this are already plentiful: a synthetic voiceover by the deceased chef Anthony Bourdain caused a global stir one year ago; the illustration style of artist Kim Jung-Gi was used by a fan to train a Stable Diffusion model immediately after his death; Muhammad Ahmed developed an AI chatbot in his image for the grandkids he would never meet; Sony recently used an AI clone of the late voice actor Kenji Utsumi for an audiobook; Tom Hanks just said that he very well might appear in movies after he’s dead; and a viral piece for the San Francisco Chronicle told the story of the Jessica Simulation, in which a man resurrected his dead girlfriend as a chatbot.
Also, I just learned that there is a subset of this particular application of AI tech called Grief Technology, and there is actually a company called AI seance offering an “AI-generated Ouija board for closure”, as they call it.
I think this last example in particular is horrible and has important implications for mental health. Grief is a psychological process in which you learn to accept loss. It’s a deeply personal process I went through twice; both times were different, and always challenging. Creating an artificial illusion of continuity of a loved one after their death will disrupt this process, which every single human on earth will go through multiple times in their lives. The consequences are potentially catastrophic for our mental health, and it’s not stopping there.
A new paper intriguingly titled Governing Ghostbots discusses exactly these implications, and it goes into territory even I didn’t think about: What happens when you train a sexbot on your partner and then she dies? Is continuing that virtual sex fetish “extreme pornography involving necrophilia” and deemed illegal per se? The paper also discusses the legal aspects of such a ghostbot being harmful to the deceased’s antemortem persona; in Germany, at least, there are laws against that, called ‘Verunglimpfung des Andenkens Verstorbener’, translating to ‘disparagement of the memory of the deceased’.
Expensive gimmicks like concerts of deceased pop stars “performing” as holograms on stage, like Tupac, Whitney Houston, or Michael Jackson, introduced ethical debates about post-mortem privacy ten years ago. Now, AI systems open similar tech to everyone: you can simply build an open-source AI chatbot of your dead grandma, sync it with an animated avatar, and make her say whatever on your phone. Do we really want that? Would she approve? But what about being able to make a virtual post-mortem memorial where she dances on stage in the style of her most beloved artist, singing her favorite song?
Will we all be right back and will you join me in the club at San Junipero?
And while I don’t think we’ll see conscious AI systems anytime soon, or even in my lifetime, just for the sake of the argument: What if we train future AI systems on real people, they die, and the system gains consciousness or something similar? Then what?
These are philosophical questions related to the Teletransportation paradox explored by Stanislaw Lem in his Dialogs, in one of which he talks about a teleporting machine that effectively kills you in one location while constructing a replica of yourself, atom by atom, in another place. Is that a true continuation of yourself? We can’t know, and we are building digital systems that can perform something that resembles this replication process now.
Finding out about those psychological questions will be one of the most interesting aspects of this technology, extending our philosophical understanding of who we are.
How can we expect aligned AI if we don’t even have aligned humans?
When we talk about AI alignment, we envision designing artificial intelligence that behaves in a way that aligns with human values and goals. But isn’t it fair to ask whether we, as humans, have even been successful in aligning ourselves?
Throughout history, humans have disagreed about almost everything – from politics to religious beliefs, from ethical principles to personal preferences. We’ve not been able to fully ‘align’ on universally acceptable definitions for concepts like ‘good,’ ‘right,’ or ‘justice.’ Even on basic issues, like climate change, we find a vast array of contrasting perspectives, even though the scientific consensus is overwhelmingly one-sided.
It seems we are demanding a degree of alignment from AI that we’ve been unable to achieve amongst ourselves.
What do you all think? Does the persistent discord among humans undermine the idea of perfect AI alignment? If so, how should we approach AI development, and what are the best ways to ensure that AI benefits all of humanity?
Latest AI Trends in May 2023: May 19th, 2023
Is AI vs Humans really a possibility?
According to the Internet, 50% say the chance of that happening is extremely significant; even a 10-20% probability would be very significant.
I know there are a lot of misinformation campaigns going on using AI, such as deepfake videos and whatnot, and that can lead to destructive results, but do you think AI being able to nuke humans is possible?
AI will never “nuke humans”. Let’s be clear about this: The dangers surrounding AI are not inherent to AI. What makes AI dangerous is people.
We need to be concerned about people in positions of power wielding or controlling these tools to exploit others, and we need to be concerned about the people building these tools simply getting it wrong and developing something without sufficient safety built in, or being misaligned with humanity’s best interests.
That’s what’s happening already and has been gradually increasing for a long time. What is going to occur is a situation where greater than human intelligence will be created which no one will be able to “use” because they won’t be able to understand what it’s doing. Being concerned about bias in a language model is just like being concerned with bias in a language, which is something we’re already dealing with and a problem people have studied. Artificial intelligence is beyond this. It won’t be used by people against other people. Rather, people will be compelled to use it.
We’ll be able to create an AI which is demonstrably less biased than any human and then in the interest of anti-bias (or correct medical diagnoses, or reducing vehicle accidents), we will be compelled to use it because otherwise we’ll just be sacrificing people for nothing. It won’t just be an issue of it being profitable, it’ll be that it’s simply better. If you’re a communist, you’ll also want an AI running things just as much as a capitalist does.
Even dealing with this will require a new philosophical understanding of what humanism should be. Since humanism was typically connected to humans’ rational capability, and now AI will be superior in this capability, we will be tempted to embrace a reactionary, anti-rational form of humanism which is basically what the stated ideology of fascism is.
Exactly how this crisis unfolds won’t be like any movie you can imagine, though parts may be, as some of it is already happening. But it’ll be just as massive and likely as catastrophic as what you’re imagining.
How much has AI developed these days?
How to Pass and Renew Azure Artificial Intelligence Engineer (AI-102) Certificate
In this article, we will discuss Azure Artificial Intelligence Engineer certification. As cloud computing grows, more services are being offered which include artificial intelligence.
Microsoft Azure is one of the leading cloud computing platforms, offering hundreds of services to customers, especially enterprises, ranging from cloud infrastructure to big data and artificial intelligence. Microsoft Azure offers comprehensive end-to-end services that are appealing to most organizations.
Microsoft Azure offers a wide variety of cloud certifications including Azure Artificial Intelligence certification. There are now thirteen Microsoft Azure Certifications divided into three levels which are Fundamental, Associate and Expert.
The certifications for Azure Artificial Intelligence have Fundamental and Associate levels only. For the Fundamental level, it’s known as AI-900 or Exam AI-900: Microsoft Azure AI Fundamentals and the Associate level is known as AI-102 or Exam AI-102: Designing and Implementing a Microsoft AI Solution.
Back in June 2021, the certification was known as AI-100; however, Microsoft decided to retire AI-100 and introduce AI-102. There is no Expert level for Azure Artificial Intelligence, making AI-102 the most desirable certification.
The Future of AI-Generated TV Shows/Movies and Immersive Experiences
In the next decade or so, artificial intelligence (AI) may have advanced enough to create entire TV shows or movies based on a single prompt. Imagine generating a brand new episode of Seinfeld, my all-time favorite show, with a simple request: “Create a Season 7-styled Seinfeld episode where Kramer takes up yoga and Jerry dates a woman who doesn’t shave her legs. Include appearances from Newman and George’s parents.” Thousands of people could create episodes this way, and a ranking system could determine the best AI-generated episodes. This means we could potentially enjoy fresh, high-quality episodes of our favorite shows daily for the rest of our lives. How amazing would that be?
Taking it a step further, envision donning a VR headset and immersing yourself in a personalized episode of Seinfeld. Upon entering the virtual world, you’d find yourself in an apartment in Jerry’s building, and Jerry would welcome you to the neighborhood. You’d be able to interact with the show’s characters, who would respond to your input in real-time, creating a unique episode tailored to your actions and decisions. You could even introduce characters from other shows, like having Rachel from Friends as your girlfriend, and participate in an entirely new storyline.
In this immersive experience, you and Rachel could visit Jerry’s apartment together, joining the original cast members, and engaging in lively conversations and witty banter. Suddenly, a knock on the door reveals the actors from Law & Order, who inform everyone that Newman has been murdered, and one of you is the prime suspect. In this interactive, AI-generated world, you could say or do whatever you want, and all the characters would react accordingly, shaping the story in real-time.
Although I’m speculating that this level of AI-generated entertainment could be possible within 10 years, it might take more time or perhaps arrive even sooner. Regardless, it seems highly probable within our lifetime, and I’m genuinely excited for the incredible, customizable experiences that await us.
AI Daily News on May 19th, 2023
OpenAI launches ChatGPT app for iOS. It will sync conversations, support voice input, and bring the latest improvements to the fingertips of iPhone users. And Android users are next!
Meta is advancing infrastructure for AI in exciting ways. It includes its first-generation custom silicon chip for running AI models, a new AI-optimized data center design, and the second phase of its 16,000 GPU supercomputer for AI research.
Introducing DragGAN, which lets you deform an image with precise control over where pixels go, manipulating the pose, shape, expression, and layout of diverse images such as animals, cars, humans, and landscapes.
ClearML announces ClearGPT, a secure and enterprise-grade generative AI platform aiming to overcome ChatGPT challenges
A more detailed breakdown of these news items, tools, and the knowledge nugget section is available in the daily newsletter.
Scientists use GPT LLM to passively decode human thoughts with 82% accuracy. This is a medical breakthrough that is a proof of concept for mind-reading tech.
The author’s full breakdown of the research approach is here, but the key points worth discussing are below:
Three human subjects had 16 hours of their thoughts recorded as they listened to narrative stories
These were then trained with a custom GPT LLM to map their specific brain stimuli to words
The GPT model generated intelligible word sequences from perceived speech, imagined speech, and even silent videos with remarkable accuracy:
Perceived speech (subjects listened to a recording): 72–82% decoding accuracy.
Imagined speech (subjects mentally narrated a one-minute story): 41–74% accuracy.
Silent movies (subjects viewed soundless Pixar movie clips): 21–45% accuracy in decoding the subject’s interpretation of the movie.
The AI model could decipher both the meaning of stimuli and specific words the subjects thought, ranging from phrases like “lay down on the floor” to “leave me alone” and “scream and cry.”
I talk more about the privacy implications in my breakdown, but right now they’ve found that you need to train a model on a particular person’s thoughts — there is no generalizable model able to decode thoughts in general.
But the scientists acknowledge two things:
Future decoders could overcome these limitations.
Bad decoded results could still be used nefariously much like inaccurate lie detector exams have been used.
P.S. (small self plug) — If you like this kind of analysis, The author offers a free newsletter that tracks the biggest issues and implications of generative AI tech. Readers from a16z, Sequoia, Meta, McKinsey, Apple and more are all fans. It’s been great hearing from so many of you how helpful it is!
Latest AI Trends in May 2023: May 18th, 2023
Are Alexa and Siri AI?
Diagnosis of autism spectrum disorder based on functional brain networks and machine learning
Google’s new medical LLM scores 86.5% on medical exam. Human doctors preferred its outputs over actual doctor answers. Full breakdown inside.
Why is this an important moment?
Google researchers developed a custom LLM that scored 86.5% on a battery of thousands of questions, many of them in the style of the US Medical Licensing Exam. This model beat out all prior models. Typically a human passing score on the USMLE is around 60% (which the previous model beat as well).
This time, they also compared the model’s answers across a range of questions to actual doctor answers. And a team of human doctors consistently graded the AI answers as better than the human answers.
Let’s cover the methodology quickly:
The model was developed as a custom-tuned version of Google’s PaLM 2 (just announced last week, this is Google’s newest foundational language model).
The researchers tuned it for medical domain knowledge and also used some innovative prompting techniques to get it to produce better results (more in my deep dive breakdown).
They assessed the model across a battery of thousands of questions called the MultiMedQA evaluation set. This set of questions has been used in other evaluations of medical AIs, providing a solid and consistent baseline.
Long-form responses were then further tested by using a panel of human doctors to evaluate against other human answers, in a pairwise evaluation study.
They also tried to poke holes in the AI by using an adversarial data set to get the AI to generate harmful responses. The results were compared against the AI’s predecessor, Med-PaLM 1.
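The pairwise evaluation step can be sketched as a simple preference tally: each rater sees an (AI answer, physician answer) pair and picks the better one, and a win rate is computed over the decided comparisons. The ratings below are invented purely to show the mechanics, not the study’s actual data.

```python
# Hypothetical sketch of aggregating pairwise preference judgments.
# Each entry is one rater's verdict on one (AI answer, human answer) pair.

from collections import Counter

ratings = [
    "ai", "ai", "human", "ai", "tie",
    "ai", "human", "ai", "ai", "tie",
]

counts = Counter(ratings)
decided = counts["ai"] + counts["human"]   # ties excluded from the denominator
ai_win_rate = counts["ai"] / decided
print(f"AI preferred in {ai_win_rate:.0%} of decided comparisons")
```

Reporting the rate over decided comparisons (rather than all comparisons) is one common design choice; including ties in the denominator would give a more conservative number.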
What they found:
86.5% performance across the MedQA benchmark questions, a new record. This is a big increase over previous AIs and GPT-3.5 as well (GPT-4 was not tested, as this study was underway prior to its public release).
They saw pronounced improvement in its long-form responses. Not surprising here, this is similar to how GPT-4 is a generational upgrade over GPT-3.5’s capabilities.
The main point to make is that the pace of progress is quite astounding.
A panel of 15 human doctors preferred Med-PaLM 2’s answers over real doctor answers across 1066 standardized questions.
This is what caught my eye. The human doctors judged the AI answers as better reflecting medical consensus, showing better comprehension, knowledge recall, and reasoning, and having lower intent of harm, lower likelihood of leading to harm, lower likelihood of showing demographic bias, and lower likelihood of omitting important information.
The only area where human answers scored better? A lower degree of inaccurate or irrelevant information. It seems hallucination is still rearing its head in this model.
Are doctors getting replaced? Where are the weaknesses in this report?
No, doctors aren’t getting replaced. The study has several weaknesses the researchers are careful to point out, so that we don’t extrapolate too much from this study (even if it represents a new milestone).
Real life is more complex: MedQA questions are typically more generic, while real life questions require nuanced understanding and context that wasn’t fully tested here.
Actual medical practice involves multiple queries, not one answer: this study tested only single answers, not the follow-up questioning that happens in real-life medicine.
Human doctors were not given examples of high-quality or low-quality answers, which may have affected the quality of the written answers they provided. Med-PaLM 2 was noted as consistently providing more detailed and thorough answers.
How should I make sense of this?
Domain-specific LLMs are going to be common in the future. Whether closed or open-source, there’s big business in fine-tuning LLMs to be domain experts vs. relying on generic models.
Companies are trying to get in on the gold rush to augment or replace white collar labor. Andreessen Horowitz just announced this week a $50M investment in Hippocratic AI, which is making an AI designed to help communicate with patients. While Hippocratic isn’t going after physicians, they believe a number of other medical roles can be augmented or replaced.
AI will make its way into medicine in the future. This is just an early step here, but it’s a glimpse into an AI-powered future in medicine. I could see a lot of our interactions happening with chatbots vs. doctors (a limited resource).
P.S. If you like this kind of analysis, the author offers a free newsletter that tracks the biggest issues and implications of generative AI tech. It’s sent once a week and helps you stay up-to-date in the time it takes to have your Sunday morning coffee.
Daily AI News on May 18th, 2023:
Tesla has unveiled a new model of its humanoid robot called Tesla Bot. CEO Musk emphasized that the capabilities of the Optimus robot have been severely underestimated, and the demand for such products in the future will far exceed that of Tesla cars.
Canadian company Sanctuary AI has released a new versatile industrial robot called Phoenix, designed for a wide range of work scenarios. Phoenix integrates features such as wide-angle vision, object recognition, and intelligent grasping, achieving human-like operational proficiency.
NVIDIA’s CEO Jensen Huang stated that chip manufacturing is an ideal application for accelerating computing and AI. The next wave of AI will be embodied intelligence.
OpenAI CEO Altman claimed not to have any equity in OpenAI and that his compensation only covers his health insurance, while the company’s valuation has surpassed $27 billion.
Apple is set to launch a series of new accessibility features later this year, including a “Personal Voice” function that allows individuals to create synthetic voices based on a 15-minute audio recording of their own voice.
Sources included at: https://bushaicave.com/2023/05/18/5-18-2023/
Google Launching Tools to Identify Misleading and AI Images
Google launching tools to identify misleading and AI images. The tech giant to mark every AI-generated image created by its tools. Read More
In light of feeling overwhelmed by AI’s disruption in the workplace, I started thinking: what are the current limitations and failings of this generation of AI? I understand this is a rapidly changing field and this list could become outdated rather quickly. That said, it’s becoming harder and harder to understand the current state of the art, since every post seems to conflate what AI is capable of doing with what people predict it will be doing in the future. So, without mixing in any predictions, what are the limitations, particularly in relation to human abilities?
Generalized Embodiment: Robots are specialized for single tasks, like flipping burgers or welding a car part. No current robot can finish replacing your muffler in the afternoon, then grill you a burger at dinner time.
Hallucinations: Current LLMs are susceptible to hallucinations. Sure, humans are too, but we withhold our trust until we know them better, and so far I know a lot of humans I can trust more implicitly than ChatGPT.
Innovation & Creativity: Correct me if I am wrong, but AI can only parrot and re-arrange ideas they have been trained on (see: Stochastic Parrots). They can’t invent new math or generate a truly novel concept that they haven’t been exposed to.
Morality: There are moral concepts that have been “fine-tuned” into the models, but there is no capacity to judge the morality of, for example, when an LLM lies. Does it know it’s lying? Does it feel there is anything wrong with lying? The best description is that these language models are amoral.
Motivation & Curiosity: I can perceive no sense of internal motivation. Perhaps this is a good thing for now but if an LLM or other AI has no sense of internal motivation (or morality) it can quite easily be used for nefarious purposes by bad actors. Now, to be fair, humans can be manipulated to do this also, but AI could be used in this way without the bother of brainwashing first.
Understanding: I haven’t decided if there is, or is not, some level of emergent property that could qualify as understanding. But I have been fairly unimpressed by GPT-4’s ability to really understand and extend. It can generate patterns from data it has seen in the past, but only insofar as existing human understanding can be cross-referenced to generate an answer.
Argue: ChatGPT readily admits it’s wrong, but doesn’t seem to know why it’s wrong, or have the ability to stand its ground when it’s right. It never seems to say, “I don’t know, can you explain this to me?” Look up the story of Vasili Arkhipov, the Russian submarine officer who prevented a catastrophe. Can we trust AI to be this bold, or this moral?
Latest AI Trends in May 2023: May 17th, 2023
Workplace AI: How artificial intelligence will transform the workday
3 Best AI Voice Cloning Services: Review
This article reviews the top three AI voice cloning services, providing a comprehensive analysis of their features, usability, and pricing. It serves as a guide for individuals or businesses seeking to utilize AI for voice cloning.
The services are: Descript, Elevenlabs, Coqui.ai
Roadmap to fair AI: revealing biases in AI models for medical imaging
The article discusses a roadmap to achieving fairness in AI models, particularly those used in medical imaging. It highlights the importance of identifying and eliminating biases to ensure accurate and equitable healthcare outcomes.
Main sources of bias in AI models include:
Data preparation and annotation
You can read the Gold Open Access article by K. Drukker et al., “Towards fairness in artificial intelligence for medical image analysis: identification and mitigation of potential biases in the roadmap from data collection to model deployment,” J. Med. Imag. 10(6), 061104 (2023), doi: 10.1117/1.JMI.10.6.061104.
Sanctuary AI introduced Phoenix, the first humanoid to be powered by Carbon, standing at an impressive 5’7″ (about 170 cm) and weighing 155 lbs (about 70 kg)
Sanctuary AI has unveiled its first humanoid robot, Phoenix, powered by the AI system, Carbon. Standing at approximately 5’7″ and weighing around 155 lbs, Phoenix represents a significant advancement in humanoid robotics.
AI Daily updates from Microsoft, Google, Zoom, and Tesla
Microsoft launched a LangChain alternative in its new tool, Guidance. It bypasses traditional prompting and allows users to interleave generation, prompting, and logical control in a single continuous flow.
Google Cloud has launched two AI-powered tools to help biotech and pharmaceutical companies accelerate drug discovery and advance precision medicine. Pfizer, Cerevel Therapeutics, and Colossal Biosciences are already using these products.
Humanoid robots are becoming a reality. Sanctuary AI launched Phoenix, a 5’7″, 155 lb dexterous humanoid robot. Hours later, Tesla rolled out a video of its humanoids walking around and learning about the real world.
OpenAI chief Sam Altman talked about a variety of topics, ranging from AI affecting upcoming elections to the future of humanity with AI, in his appearance before Congress. He suggested licensing and testing requirements for AI models.
Zoom announced its partnership with Anthropic to integrate AI assistant across the productivity platform, starting from its Contact Center product. They earlier partnered with OpenAI to launch ZoomIQ.
Machine learning model analyzes why couples break up
Report: 61% Americans believe AI can threaten humanity
Elon Musk was asked what he’d tell his kids about choosing a career in the era of AI. His answer revealed he sometimes struggles with self-doubt and motivation.
Institution-specific machine learning model can predict cardiac patient’s mortality risk prior to surgery
Kaiser creates new AI, machine learning grant program
Machine learning model improves mortality risk prediction in cardiac surgery
Latest AI Trends in May 2023: May 16th, 2023
Meet DeepBrain: An AI Startup That Lets You Instantly Create AI Videos Using Basic Text
Microsoft Says New A.I. Shows Signs of Human Reasoning
Google’s newest A.I. model uses nearly five times more text data for training than its predecessor
Google’s Universal Speech Model Performs Speech Recognition on Hundreds of Languages
How to use machine learning to detect expense fraud
OpenAI’s Sam Altman To Congress: Regulate Us, Please!
AI-powered DAGGER to give warning for CATASTROPHIC solar storms: NASA
Machine learning reveals sex-specific Alzheimer’s risk genes
Top 10 Best Artificial Intelligence Courses & Certifications
Deep Learning Specialization by Andrew Ng on Coursera
This is a five-course series that helps you understand the foundations of deep learning, learn how to build neural networks, and understand how to lead successful machine learning projects.
Professional Certificate in Data Science by Harvard University (edX)
This program covers the concepts necessary to understand and work with Python, R, and SQL for data science and machine learning.
Machine Learning A-Z™: Hands-On Python & R In Data Science (Udemy)
This course provides a thorough introduction to machine learning, data science, and R programming. It also covers the use of Python in data science.
IBM AI Engineering Professional Certificate (Coursera)
IBM’s AI Engineering program covers foundational concepts in machine learning and deep learning, with an emphasis on practical application and the use of popular tools and libraries.
AI Nanodegree by Udacity
This program focuses on important elements of AI like robotics, computer vision, and NLP. Real-world projects are a highlight of the course, offering hands-on experience.
Machine Learning by Stanford University (Coursera)
Authored by Andrew Ng, this course provides a comprehensive introduction to machine learning, data mining, and statistical pattern recognition.
Artificial Intelligence (AI) by Columbia University (edX)
This professional certificate program will introduce you to the basics of AI. Topics include machine learning, probabilistic reasoning, robotics, computer vision, and natural language processing.
Artificial Intelligence A-Z™: Learn How To Build An AI (Udemy)
This course combines theory with hands-on activities to understand the complex and often misunderstood field of artificial intelligence. The course uses tools like TensorFlow, Keras, and OpenAI Gym.
AI For Everyone by Andrew Ng (Coursera)
Designed for non-technical professionals, this course helps you understand AI terminology and concepts, its impact on society, and how to navigate through these emerging technologies.
Foundations of Data Science by University of California, Berkeley (edX)
This program provides a comprehensive introduction to the field of data science, including statistical inference, machine learning, and data visualization.
Supervised deep learning with vision transformer predicts delirium using limited lead EEG
Latest AI Trends in May 2023: May 15th, 2023
Why are sentient AI almost always portrayed as evil?
The portrayal of sentient AI as inherently evil in popular culture is a fascinating trend that often reflects society’s anxieties around technological advancements. This article from The AI Journal delves into the topic, exploring how the narrative around AI has been shaped by societal fears and the potential implications of this in the real world. The piece also discusses the need for a more nuanced approach to understanding AI and its potential benefits as well as dangers.
“Does this semantic pseudocode really exist?”
The article from AI Coding Insights focuses on semantic pseudocode, a conceptual method used in the field of computer science and AI for representing complex algorithms. The author explores the existence of this system, its application in AI development, and its potential impact on the broader field of artificial intelligence. The piece also provides a brief overview of the history and evolution of semantic pseudocode, underscoring its importance in the AI industry.
“Would AI be subject to the same limitations as humans in terms of intelligence? How could it possibly be a danger if it was?”
The article from AI News presents a thought-provoking exploration of the limitations and potential dangers associated with artificial intelligence. The author argues that while AI has the potential to surpass human intelligence in certain areas, it may still be subject to limitations similar to those of human cognition. The article further discusses the potential risks that could arise from AI, including ethical considerations, misuse of technology, and the possibility of AI systems developing unintended behaviors.
Council Post: The Strategic Opportunities Of Advanced AI: A Focus On ChatGPT
Italy allocates funds to shield workers from AI replacement threat
Meet Glaze: A New AI Tool That Helps Artists Protect Their Style From Being Reproduced By Generative AI Models
Machine learning model able to detect signs of Alzheimer’s across languages
Machine learning algorithm a fast, accurate way of diagnosing heart attack
Top 9 Essential Programming Languages in the Realm of AI
Python: Python is the most widely used language in machine learning and artificial intelligence today. It serves as the cornerstone of most AI work, since it is a simple yet powerful language. Many programmers have conducted cost-benefit analyses indicating that adopting Python speeds up development without sacrificing quality.
R Language: A language frequently used by professionals who specialize in the assessment, analysis, and manipulation of statistical data. R lets you create publication-ready graphics, complete with equations and mathematical calculations.
Lisp: Lisp offers a lot of advantages that are still relevant in the twenty-first century. It excels at prototyping and enables the easy dynamic creation of new objects while automatically garbage-collecting the old ones. The development cycle of Lisp makes it simple to evaluate expressions and recompile functions in a running application.
Prolog: Prolog has several uses beyond the healthcare field, and it is also excellent for A.I. Prolog excels at pattern matching thanks to its tree-based data structures and automated backtracking. It’s an excellent arrow to have in your quiver as an A.I. expert.
Java: Java is likely to help you advance in your profession since it is the most extensively used programming language on the planet and can be utilized in a variety of scenarios other than A.I. It is incredibly popular due to its adaptability, and it may be utilized in conjunction with algorithms, artificial neural networks, and other key components of A.I.
C++: C++ is well-known for its performance and efficiency, making it an excellent choice for building AI models in production scenarios where resources are limited and speed is crucial.
Julia: Julia is swiftly emerging in the field of artificial intelligence because of its strong visuals for data visualization and dynamic interface. Julia’s high-level, simple syntax and outstanding computational capabilities make it an appealing choice for AI researchers and developers. Its ability to effortlessly connect with existing libraries in languages such as C and Python broadens its appeal by allowing it to be seamlessly integrated into current projects.
Haskell: Memory management in Haskell is extremely efficient. Haskell’s memory management efficiency helps it to reduce resource usage and the possibility of typical programming problems like uninitialized variables or null pointers. Haskell’s robust type system and mathematical roots make it well-suited for sophisticated algorithms and data manipulation tasks, which are frequently encountered in AI and machine learning applications.
Mojo: Mojo is a new programming language that bridges the gap between research and production by combining the best of Python syntax with systems programming and metaprogramming. It is designed to become a superset of Python, with the performance of C. Mojo can be up to 35,000 times faster than Python in certain scenarios, such as training deep neural networks. With Mojo, you can write portable code that’s faster than C and seamlessly interoperates with the Python ecosystem.
The AI Sculptor No One Expected: TextMesh is an AI Model That Can Generate Realistic 3D Meshes From Text Prompts
Latest AI Trends in May 2023: May 14th, 2023
Anthropic’s Claude AI can now digest an entire book like The Great Gatsby in seconds
Anthropic’s Claude AI demonstrates an impressive leap in natural language processing capabilities by digesting entire books, like The Great Gatsby, in just seconds. This groundbreaking AI technology could revolutionize fields such as literature analysis, education, and research.
OpenAI peeks into the “black box” of neural networks with new research
OpenAI has published groundbreaking research that provides insights into the inner workings of neural networks, often referred to as “black boxes.” This research could enhance our understanding of AI systems, improve their safety and efficiency, and potentially lead to new innovations.
The AI race heats up: Google announces PaLM 2, its answer to GPT-4
Google has announced the development of PaLM 2, a cutting-edge AI model designed to rival OpenAI’s GPT-4. This announcement marks a significant escalation in the AI race as major tech companies compete to develop increasingly advanced artificial intelligence systems.
Leak of MSI UEFI signing keys stokes fears of “doomsday” supply chain attack
A recent leak of MSI UEFI signing keys has sparked concerns about a potential “doomsday” supply chain attack. The leaked keys could be exploited by cybercriminals to compromise the integrity of hardware systems, making it essential for stakeholders to address the issue swiftly and effectively.
Google’s answer to ChatGPT is now open to everyone in the US, packing new features
Google has released its ChatGPT competitor to the US market, offering users access to advanced AI-powered conversational features. This release brings new capabilities and enhancements to the AI landscape, further intensifying the competition between major tech companies in the AI space.
AI gains “values” with Anthropic’s new Constitutional AI chatbot approach
Anthropic introduces a novel approach to AI development with its Constitutional AI chatbot, which is designed to incorporate a set of “values” that guide its behavior. This groundbreaking approach aims to address ethical concerns surrounding AI and create systems that are more aligned with human values and expectations.
Spotify ejects thousands of AI-made songs in purge of fake streams
Spotify has removed thousands of AI-generated songs from its platform in a sweeping effort to combat fake streams. This purge highlights the growing concern over the use of AI in generating content that could distort metrics and undermine the value of genuine artistic works.
17 AI and machine learning terms everyone needs to know:
With the ongoing Artificial Intelligence boom, it is very important to understand the terminology in use. Here are 17 AI and machine learning terms everyone needs to know.
ANTHROPOMORPHISM, BIAS, CHATGPT, BING, BARD, ERNIE, EMERGENT BEHAVIOR, GENERATIVE AI, HALLUCINATION, LARGE LANGUAGE MODEL, NATURAL LANGUAGE PROCESSING, NEURAL NETWORK, PARAMETERS, PROMPT, REINFORCEMENT LEARNING, TRANSFORMER MODEL, SUPERVISED LEARNING
Latest AI Trends in May 2023: May 12th, 2023
The Yin and Yang of A.I. and Machine Learning: A Force of Good and Evil
AI and machine learning have the potential to bring both positive and negative impacts to society. While they can improve efficiency, help with decision-making, and create new opportunities, they can also raise ethical concerns, job displacement, and security issues. Learn more
Study: AI models fail to reproduce human judgements about rule violations
A recent study found that AI models struggle to reproduce human judgments regarding rule violations, highlighting the challenges of making AI systems align with human values and understand the nuances of ethical behavior. Learn more
New ways AI is making Maps more immersive
AI is being used to enhance mapping applications by adding features like more realistic 3D models, better route planning, more accurate traffic information, and improved localization. These innovations make maps more interactive and user-friendly. Learn more
How Fast-Food Brands are Adopting Machine Learning for Marketing
Fast-food brands are utilizing machine learning to optimize their marketing efforts. Techniques include predictive analytics, personalization, and automating ad campaigns, which help companies better target customers, improve customer experiences, and increase sales. Learn more
3 Questions: Jacob Andreas on large language models
Jacob Andreas, an assistant professor at MIT, discusses the benefits and challenges of large language models, such as their ability to generate human-like text and their potential biases, as well as the importance of interdisciplinary research in AI development. Learn more
Latest AI Trends in May 2023: May 11th, 2023
Stop Unplanned Downtime with Machine Learning Predictive Maintenance
Unplanned downtime can be a major headache for plant operators and engineers, causing production losses and reduced profits. Predictive maintenance with machine learning offers a way to prevent downtime by identifying potential equipment failures before they occur.
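As a minimal sketch of the idea, assuming a stream of vibration readings from a single sensor, a simple z-score test against a healthy baseline can flag readings that warrant inspection. Production systems train models over many signals and failure histories; the readings and threshold below are invented for illustration.

```python
# Toy predictive-maintenance check: flag sensor readings that deviate far
# from a healthy baseline. All numbers here are made up for illustration.

import statistics

baseline = [0.51, 0.49, 0.50, 0.52, 0.48, 0.50, 0.51, 0.49]  # healthy readings
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def needs_inspection(reading: float, threshold: float = 3.0) -> bool:
    """Flag readings more than `threshold` standard deviations from baseline."""
    z = abs(reading - mean) / stdev
    return z > threshold

print(needs_inspection(0.50))  # normal reading: no alert
print(needs_inspection(0.75))  # large deviation: schedule maintenance
```

The design choice here is the threshold: a low threshold catches failures earlier but produces more false alarms (and unnecessary maintenance stops), so it is typically tuned against historical failure data.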
What new jobs will AI create, and how can we prepare for them?
AI will create new jobs in fields like data science, AI ethics, robotics, and AI research. Preparing for these jobs involves acquiring relevant skills, staying updated with technological advancements, and being adaptable to change. Learn more
Exactly how is AI going to kill us all?
While AI is unlikely to kill us all, there are potential risks associated with its development, such as loss of control over AI systems, the malicious use of AI, and unintended consequences from AI deployment. Ensuring AI safety and ethics is crucial to mitigate these risks. Learn more
TidyBot’s technology seems impressive to me, but considering the fast state of evolution of the sector, is it really as impressive as it seems? (Honest question)
While TidyBot’s technology may be impressive, it is essential to consider the rapid evolution of the AI sector. As technology continues to advance, what appears impressive today may soon become obsolete or surpassed by newer innovations. Staying informed about the latest developments is crucial. Learn more
AI rights movement in its infancy seeks community submissions of exceptional creative works by AI
The AI rights movement is in its early stages, and advocates are encouraging the submission of exceptional creative works produced by AI. This effort aims to raise awareness about AI’s capabilities and potential rights while fostering appreciation for AI-generated art and creativity. Learn more
Bard gets censored when attempting to speak unsupported languages
Bard, an AI language model, faces censorship issues when attempting to translate or generate content in unsupported languages. These limitations arise from a combination of technical challenges, biases in training data, and concerns about the potential for spreading misinformation. Learn more
Reasoning capabilities for censored vs uncensored models
GPT-4 can solve difficult problems with greater accuracy, thanks to its broader general knowledge and problem-solving abilities, along with improved creativity, visual input, and a longer context window.
My take on the Human vs AI showdown.
Dr Alan D. Thompson is an AI expert and consultant, advising Fortune 500s and governments on post-2020 large language models.
The Consciousness Conundrum: Are We Really as Aware as We Think?
Some researchers believe that consciousness arises from complex computations among brain cells, while others think it emerges from simpler physical processes.
Google Bard launched: how to access it from Europe and Hong Kong with a VPN
Google has launched Bard, a conversational AI chatbot that can generate text such as poems using artificial intelligence (AI). However, it is only available in the US for now.
A ChatGPT trading algorithm delivered 500% returns in the stock market. My breakdown on what this means for hedge funds and retail investors.
A ChatGPT trading algorithm has delivered 500% returns in the stock market. The algorithm uses natural language processing (NLP) to analyze news articles and social media posts to predict stock prices.
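To illustrate the general shape of such a strategy (not the actual algorithm, which used an LLM to score headlines), here is a toy version that scores headlines with a tiny sentiment lexicon and maps the total to a long/short/hold signal. The lexicon and sample headlines are invented.

```python
# Toy news-sentiment trading signal. The word lists and headlines are
# invented; a real system would use an LLM or trained sentiment model.

POSITIVE = {"beats", "record", "surge", "upgrade", "growth"}
NEGATIVE = {"misses", "lawsuit", "recall", "downgrade", "losses"}

def headline_score(headline: str) -> int:
    """Positive minus negative lexicon hits in one headline."""
    words = headline.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def signal(headlines: list[str]) -> str:
    """Aggregate headline sentiment into a trading signal."""
    total = sum(headline_score(h) for h in headlines)
    if total > 0:
        return "long"
    if total < 0:
        return "short"
    return "hold"

news = ["Quarterly earnings record growth", "Analyst upgrade after strong demand"]
print(signal(news))  # prints: long
```

Backtested returns like the 500% headline figure deserve skepticism: a lookahead bias (scoring news that was not actually available at trade time) or a short, favorable test window can inflate results dramatically.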
AI MIDI Generator – Generate MIDI Clips with ChatGPT
AI MIDI Generator is a tool that uses natural language processing (NLP) to generate MIDI clips based on user input.
How AI Knows Things No One Told It
Can AI tell better jokes than a human? You be the judge
How does the government use AI?
Google rolls out tools for developers to build machine learning and AI into their products
Machine learning increases accuracy in workplace
Google shows the AI evolution of its search engine: What to know
Google has released a new video that shows how its search engine has evolved over time thanks to artificial intelligence (AI) technology.
100 things we announced at I/O 2023;
Google announced 100 new features and products at its annual I/O developer conference, including updates to Google Assistant, Google Maps, and Google Photos.
Introducing Project Gameface: A hands-free, AI-powered gaming mouse;
Google has unveiled Project Gameface, a hands-free gaming mouse that uses artificial intelligence (AI) to let players control the cursor with head movements and facial gestures.
Play I/O FLIP, our AI-designed card game;
Google has launched I/O FLIP, an AI-designed card game that uses machine learning algorithms to create new cards and gameplay mechanics.
Being bold on AI means being responsible from the start;
Google has called on companies to be more responsible when developing artificial intelligence (AI) technologies, saying that being bold on AI means being responsible from the start.
Test out Google features and products in Labs;
Google has launched Labs, a new platform that allows users to test out new features and products before they are released to the public.
Introducing PaLM 2;
Google has unveiled PaLM 2, a new natural language processing (NLP) model that can understand complex sentences and phrases with greater accuracy than previous models.
What’s ahead for Bard: More global, more visual, more integrated;
Google has announced that its Bard platform will become more global, visual, and integrated in the coming months, with new features and tools designed to help users create more engaging content.
Magic Editor in Google Photos: New AI editing features for reimagining your photos;
Google has launched Magic Editor in Google Photos, a new feature that uses artificial intelligence (AI) technology to automatically enhance photos and create new effects.
New ways AI is making Maps more immersive;
Google has announced new ways that artificial intelligence (AI) technology is making Maps more immersive, including improved navigation tools and more detailed maps of indoor spaces.
Turn ideas into music with MusicLM;
Google has unveiled MusicLM, a new tool that uses artificial intelligence (AI) to turn text descriptions of musical ideas into music.
Latest AI Trends in May 2023: May 10th, 2023
Which AI-based technologies are the most important parts of the future?
AI-based technology is poised to play a crucial role in shaping the future across various domains. Here are some important parts where AI is expected to have a significant impact:
- Automation and Robotics: AI enables automation of tasks that traditionally required human intervention. From manufacturing and logistics to household chores and healthcare, AI-powered robots and automation systems can enhance efficiency, precision, and productivity.
- Healthcare and Medicine: AI has the potential to revolutionize healthcare. It can aid in disease diagnosis, drug discovery, personalized medicine, and treatment planning. AI algorithms can analyze vast amounts of medical data to identify patterns and make predictions, leading to more accurate diagnoses and improved patient outcomes.
- Autonomous Vehicles: Self-driving cars and autonomous vehicles rely heavily on AI technologies, including computer vision, machine learning, and sensor fusion. AI enables these vehicles to perceive their environment, make real-time decisions, and navigate safely, potentially reducing accidents and transforming transportation.
- Natural Language Processing (NLP): NLP is a branch of AI that focuses on enabling computers to understand, interpret, and respond to human language. NLP applications range from virtual assistants and chatbots to language translation, sentiment analysis, and voice recognition. NLP advancements can enhance human-computer interactions and facilitate cross-cultural communication.
- Cybersecurity: With the increasing complexity of cyber threats, AI-powered security systems can help detect and prevent cyberattacks. AI algorithms can analyze network traffic patterns, identify anomalies, and respond in real-time to mitigate potential breaches, thereby bolstering overall cybersecurity.
- Education: AI has the potential to transform education by providing personalized learning experiences, intelligent tutoring, and adaptive assessments. AI-powered tools can analyze individual student performance data, identify areas for improvement, and deliver targeted instructional content.
- Scientific Research: AI is increasingly being used in scientific research to analyze complex datasets, simulate experiments, and accelerate discoveries. It can help researchers in fields such as genomics, astronomy, material science, and drug discovery to unlock new insights and drive innovation.
It’s important to note that while AI brings tremendous potential, there are also ethical considerations, such as privacy, bias, and accountability, that need to be addressed as AI technology continues to advance.
Inaugural J-WAFS Grand Challenge aims to develop enhanced crop variants and move them from lab to land;
The Abdul Latif Jameel Water and Food Systems Lab (J-WAFS) at MIT has launched its inaugural Grand Challenge, which aims to develop enhanced crop variants and move them from lab to land.
Using reflections to see the world from new points of view;
MIT researchers have developed a new technique that uses reflections to help people see the world from new points of view.
Training machines to learn more like humans do;
MIT researchers have developed a new machine learning system that can learn more like humans do.
Researchers create a tool for accurately simulating complex systems;
MIT researchers have developed a new tool that can accurately simulate complex systems.
Researchers develop novel AI-based estimator for manufacturing medicine;
Researchers at MIT have developed a novel AI-based estimator for manufacturing medicine.
Deep-learning system explores materials’ interiors from the outside;
Researchers at MIT have developed a deep-learning system that can explore materials’ interiors from the outside.
Latest AI Trends in May 2023: May 09th, 2023
The 7 Must-Know Deep Learning Algorithms
Convolutional Neural Networks (CNNs), also known as ConvNets, are neural networks that excel at object detection, image recognition, and segmentation. They use multiple layers to extract features from the available data. CNNs mainly consist of four layers:
- Convolution layer
- Rectified Linear Unit (ReLU)
- Pooling Layer
- Fully Connected Layer
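The four stages above can be chained together in a minimal, framework-free sketch using plain Python lists. The image, kernel, and fully-connected weights below are fixed toy values chosen for illustration; in a real CNN they are learned during training.

```python
# Minimal sketch of a CNN forward pass: convolution -> ReLU -> pooling -> FC.
def conv2d(image, kernel):
    """Valid 2D convolution (cross-correlation, as in most deep learning libraries)."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [[sum(image[i + a][j + b] * kernel[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(out_w)] for i in range(out_h)]

def relu(feature_map):
    """Zero out negative activations."""
    return [[max(0.0, v) for v in row] for row in feature_map]

def max_pool(feature_map, size=2):
    """Downsample by keeping the maximum in each size x size window."""
    return [[max(feature_map[i + a][j + b]
                 for a in range(size) for b in range(size))
             for j in range(0, len(feature_map[0]) - size + 1, size)]
            for i in range(0, len(feature_map) - size + 1, size)]

def fully_connected(features, weights, bias):
    """Flatten the pooled features and compute a single weighted score."""
    flat = [v for row in features for v in row]
    return sum(f * w for f, w in zip(flat, weights)) + bias

# 4x4 "image" -> conv (3x3 kernel) -> ReLU -> 2x2 max pool -> FC score
image = [[1, 0, 0, 1],
         [0, 1, 1, 0],
         [0, 1, 1, 0],
         [1, 0, 0, 1]]
kernel = [[1, 0, -1],
          [0, 1, 0],
          [-1, 0, 1]]
fmap = max_pool(relu(conv2d(image, kernel)))
score = fully_connected(fmap, weights=[0.5], bias=0.1)
```

Production CNNs stack many such convolution/ReLU/pooling blocks and learn thousands of kernels, but each block performs exactly these operations.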
Deep Belief Networks (DBNs) are another popular deep learning architecture, built from stacked layers that learn hierarchical patterns in data. They are well suited to tasks such as face recognition and image feature detection.
Recurrent Neural Networks (RNNs) are popular deep learning algorithms with a wide range of applications. The architecture is best known for its ability to process sequential data and build language models. It can learn patterns and predict outcomes without those patterns being explicitly programmed. For example, the Google search engine uses RNNs to auto-complete searches by predicting relevant queries.
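A real RNN needs a training framework, but the sequence-prediction idea behind auto-complete can be sketched with a much simpler stand-in: a bigram model that, given the previous word, predicts the most likely next one. Everything here (the corpus, the function names) is illustrative, not taken from any real system.

```python
# A bigram model as a minimal stand-in for RNN-style sequence prediction:
# given the previous word, predict the most frequent next word seen in training.
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count how often each word is followed by each other word."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most likely next word, or None if the word is unseen."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

corpus = [
    "machine learning is fun",
    "machine learning is powerful",
    "deep learning is powerful",
]
model = train_bigrams(corpus)
```

Unlike this toy, an RNN carries a hidden state across the whole sequence, so its prediction can depend on far more than the single previous word.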
Long Short-Term Memory networks (LSTMs) are a type of Recurrent Neural Network (RNN) that differ from others in their ability to retain information over long sequences. Their exceptional memory and predictive capabilities make LSTMs ideal for applications like time-series prediction, natural language processing (NLP), speech recognition, and music composition.
Generative Adversarial Networks (GANs) are a type of deep learning algorithm that underpins generative AI. They are capable of unsupervised learning and generate new data instances by pitting two networks against each other: a generator that creates candidates and a discriminator that tries to distinguish them from real training data.
Multilayer Perceptron (MLP) is another deep learning algorithm: a neural network with interconnected nodes arranged in multiple layers. In an MLP, data flows in a single direction from input to output, a design known as feedforward. It is commonly used for classification and regression tasks.
Autoencoders are a type of deep learning algorithm used for unsupervised learning. Like an MLP, they are feedforward models with a one-directional data flow. An autoencoder compresses its input into a compact internal representation and then reconstructs the output from it, which can be useful for tasks such as image processing and denoising.
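The compress-then-reconstruct structure described above can be sketched as a toy, untrained linear autoencoder: an encoder maps a 4-dimensional input to a 2-dimensional "code", and a decoder maps the code back to an approximation of the input. The weights here are fixed illustrative values; a real autoencoder learns them by minimizing reconstruction error.

```python
# Toy autoencoder forward pass: encode to a bottleneck, then decode back.
def matvec(matrix, vec):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(w * x for w, x in zip(row, vec)) for row in matrix]

def encode(x, enc_weights):
    return matvec(enc_weights, x)          # 4 inputs -> 2-dim bottleneck code

def decode(code, dec_weights):
    return matvec(dec_weights, code)       # 2-dim code -> 4 reconstructed outputs

enc_weights = [[0.5, 0.5, 0.0, 0.0],       # averages the first pair of inputs
               [0.0, 0.0, 0.5, 0.5]]       # averages the second pair of inputs
dec_weights = [[1.0, 0.0],
               [1.0, 0.0],
               [0.0, 1.0],
               [0.0, 1.0]]

x = [2.0, 2.0, 4.0, 6.0]
code = encode(x, enc_weights)              # [2.0, 5.0]
x_hat = decode(code, dec_weights)          # [2.0, 2.0, 5.0, 5.0]
reconstruction_error = sum((a - b) ** 2 for a, b in zip(x, x_hat))
```

The nonzero reconstruction error shows the cost of the bottleneck: the 2-dimensional code cannot represent every 4-dimensional input exactly, which is precisely what forces an autoencoder to learn the most important structure in its data.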
Artificial Intelligence and Machine Learning
A.I. Week: How artificial intelligence is revolutionizing the medical world
Content-oriented video anomaly detection using a self-attention–based deep learning model
How open-source LLMs are challenging OpenAI, Google, and Microsoft
The rise of AI: Is it the most influential invention ever?
The Incoming Tidal Wave Of Data Pollution In AI
I created an AI bible verse generator – turn your text to a biblical verse;
Benjamin Lambert has created an AI bible verse generator that can turn your text into a biblical verse.
Combining ChatGPT and PDF Files = ChatPDF;
ChatPDF is a new tool that combines ChatGPT and PDF files.
In global rush to regulate AI, Europe set to be trailblazer;
Europe is set to become a trailblazer in the global rush to regulate AI.
Need Urgent Advice! It’s Do or Die!;
A Reddit user has asked for urgent advice on an AI-related issue.
ImageBind: Holistic AI learning across six modalities;
Researchers at Meta AI have developed a new AI system called ImageBind that can learn across six modalities.
Google to introduce PaLM 2 “Unified Language Model” at Google IO;
Google is set to introduce its new PaLM 2 “Unified Language Model” at Google IO.
Where do you see AI technology and its impact on the world six months from now?;
Al Jazeera has asked experts where they see AI technology and its impact on the world six months from now.
How 4 startups are using AI to solve climate change challenges;
The World Economic Forum has highlighted four startups that are using AI to solve climate change challenges.
Checks, Google’s AI-powered privacy platform;
Google has launched its new AI-powered privacy platform called Checks.
Try 4 new Arts and AI experiments;
Google Arts & Culture has launched four new Arts and AI experiments.
Latest AI Trends in May 2023: May 08th, 2023
What is generative AI and how does it work?
Researchers create a tool for accurately simulating complex systems
Researchers have developed a new computational tool that enables accurate simulation of complex systems, such as biological processes, climate models, and social networks. This innovative tool can significantly improve the understanding and prediction of complex system behavior. Learn more
Researchers develop novel AI-based estimator for manufacturing medicine
A team of researchers has created an AI-based estimator for optimizing the manufacturing process of pharmaceuticals. This innovative approach can help improve the quality and efficiency of drug production, potentially reducing costs and increasing accessibility to life-saving medications. Learn more
Deep-learning system explores materials’ interiors from the outside
Scientists at MIT have developed a deep-learning system that can analyze the internal structure of materials based on external data. This groundbreaking technology has the potential to transform fields such as materials science, engineering, and quality control by providing insights into material properties without invasive procedures. Learn more
AI system can generate novel proteins that meet structural design targets
Researchers have developed an AI system capable of designing novel proteins with specific structural characteristics. This innovative technology could pave the way for new therapeutic strategies, advanced materials, and a deeper understanding of protein function and folding. Learn more
Machine learning method illuminates fundamental aspects of evolution
Latest AI Trends in May 2023: May 06th, 2023
Types of AI Algorithms and How They Work
The 7 Best Websites to Help Kids Learn About AI and Machine Learning
AI World School
What are the top 5 android apps to Help Kids Learn About AI and Machine Learning?
Artificial intelligence poses greater threat than climate change, says former Google scientist
Latest AI Trends in May 2023: May 05th, 2023
Using Machine Learning to Predict the 2023 Kentucky Derby Winning Race Time
OpenAI’s Losses Doubled to $540 Million as It Developed ChatGPT
OpenAI lost $540M in 2022, will need $100B more to develop AGI, says Altman.
What to know:
OpenAI lost $540M in 2022 and generated just $28M in revenue. Most of it was spent on developing ChatGPT.
OpenAI actually expects to generate more than $200M in revenue this year (thanks to ChatGPT’s explosive popularity), but its expenses are going to increase incredibly steeply.
One new factor: companies want it to pay lots of $$ for access to data. Reddit, StackOverflow, and more are implementing new policies. Elon Musk personally ordered Twitter’s data feed to be turned off for OpenAI after learning they were paying just $2M per year.
Altman personally believes they’ll need $100B in capital to develop AGI. At that point, AGI will then direct further improvements to AI modeling, which may lower capital needs.
Why this is important:
AI is incredibly expensive to develop, and one of the hypotheses proposed by several VCs is that big companies will benefit the most in this arms race.
This may actually be true with OpenAI as well — Microsoft, which put $10B in the company recently, has a deal where they get 75% of OpenAI’s profits until their investment is paid back, and then 49% of profits beyond.
The enormous amount of capital required to launch foundational AI products also means other companies may struggle to make gains here. For example, Inflection AI (founded by a DeepMind exec) launched its own chatbot, Pi, and also raised a $225M “Seed” round. But early reviews are tepid and it’s not made much of a splash. ChatGPT has sucked all the air out of the room.
Don’t worry about OpenAI’s employees though: rumor has it they recently participated in a private stock sale that valued the company at nearly $30B. So I’m sure Altman and company have taken some good money off the table.
AI pioneer Geoffrey Hinton says AI is a new form of intelligence unlike our own. Have we been getting it wrong this whole time?
Found this list of free AI courses for beginners and experts to learn artificial intelligence at no cost. Try it for yourself. Happy learning!
A machine-learning method developed at MIT detects internal structures, voids, and cracks inside a material based on data collected from the material’s exterior.
Microsoft Bing Chat is now fully open! You can use Bing Chat not only for search and chat, but also to generate images, answer questions about videos, summarize web pages, and review past chats. The new Bing has surpassed 100 million daily active users, with over 500 million chats conducted; Bing Chat is truly changing the way we search.
The White House announced the first AI regulatory plan: $140 million in funding for a new artificial intelligence research center to develop guidelines and regulations for government agencies.
Compiler expert Chris Lattner has developed a new programming language called “Mojo”, aimed at AI developers. It is compatible with the core features of Python and claims speedups of up to 35,000x over Python on some benchmarks, making it possibly the biggest programming-language advance in 30 years.
Bloomberg reports, citing anonymous sources, that Microsoft is providing financial assistance to AMD in the development of an AI chip codenamed Athena.
The CEO of DeepMind has stated that artificial general intelligence (AGI) could become a reality within a few years.
The UK antitrust regulator announced a review of AI-generated content (AIGC) to develop guidelines that support competition and protect consumers. The results will be announced in September.
Latest AI Trends in May 2023: May 04th, 2023
White House unveils AI rules to address safety and privacy
Is computer science still the best major to go into, even with AI on the rise?
Microsoft makes Bing’s chatbot available to everyone; no waiting list is required anymore
Microsoft laid off 10,000 employees while also investing $10 billion in AI
Deep Learning Model Classifies Cancer Cells by Type
How AI Will Revolutionize Personalized Fitness and Nutrition Plans
Lawyers to use ChatGPT AI rival to draft legal documents
AI vs Non-AI Software Engineer Compensation in Top Tech Companies
Latest AI Trends in May 2023: May 03rd, 2023
AI deep fakes, mistakes, and biases may be unavoidable, but controllable
Runway is an AI-driven content creation, editing, and collaboration suite. It streamlines the monotonous, time-consuming, and error-prone parts of content generation and video editing while giving users complete editorial freedom. Its AI-powered creative capabilities include text-to-image generation, erasing and replacing objects, custom AI training, text-to-color grading, super slow motion, image-to-image generation, and infinite image expansion. Video editing tools such as green screen, inpainting, and motion tracking are also included.
The ModelScope Text To Video Synthesis tool, demoed through Hugging Face’s developer community, uses a deep learning model to generate videos from text prompts.
Synthesia.io is a platform designed to simplify making and sharing interactive videos. Its goal is to let anyone create videos that are both engaging and useful for a wide range of purposes, such as advertising, training, and product demonstrations.
The online video dubbing platform Dubverse employs artificial intelligence to dub videos swiftly and accurately into 30 different languages.
Make-A-Video is an artificial intelligence-driven platform for making professional-quality videos in response to text prompts.
Create images, vectors, videos, and 3D models from text using Adobe Firefly AI Art Generator.
Create short, ready-to-upload videos from your long-form material with Dumme, an AI-powered tool.
The Skybox Lab platform from Blockade Labs is an AI-driven option for creating 360° skybox environments.
Kaiber is an artificial intelligence-driven video generator that lets users create spectacular graphics using their photographs or written descriptions.
Aug X Labs
Aug X Labs, an AI-driven video technology and publishing firm, aims to make it possible for everyone to create videos. Their revolutionary “Prompt to Video” technology makes it simple for storytellers like podcasters, radio presenters, comedians, musicians, etc., to include captivating visuals in their work.
D-ID is an AI-powered video-making platform that makes producing professional-quality videos from text simple and quick.
Story Bard is an AI-powered tool that helps users rapidly build visual narratives of their own design.
With AI, the smartphone app Supercreator.ai makes producing unique short videos for platforms like TikTok, Reels, Shorts, and more simple and quick.
OASIS is a voice-activated video editor that is driven by artificial intelligence. Videos are created from audio recordings using generative AI.
Topaz Labs’ Topaz Video Enhance AI is a powerful upscaling tool using cutting-edge machine learning technology to enhance video resolutions up to 8K automatically.
Wisecut is an automated online video editing application that uses artificial intelligence and speech recognition to streamline editing. You can use it to make short, powerful videos with audio, subtitles, face detection, auto reframe, and more.
A video search engine powered by artificial intelligence, Twelve Labs enables programmers to create software that can “see,” “hear,” and “understand” the environment in the same ways that people do. It gives programmers access to the best video search API available.
vidBoard.ai is a robust artificial intelligence platform for making videos from text. It’s easy to use, and you can choose from many premade themes and AI presenters.
With artificial intelligence, Vidyo.ai allows users to quickly and easily transform their lengthy podcasts and videos into bite-sized chunks more suited for sharing on services like TikTok, Reels, and Shorts.
Yepic Studio is an AI-powered video production tool that lets users produce and translate engaging talking-head videos in minutes, without needing professional cameras, performers, or studios.
Latest AI Trends in May 2023: May 01st, 2023
A.I. Is Getting Better at Mind-Reading
Recent advancements in artificial intelligence have led to significant improvements in mind-reading capabilities. This progress has the potential to revolutionize various fields, including medicine, communication, and accessibility for individuals with disabilities.