Latest AI Trends in June 2023
Welcome, dear readers, to another fascinating edition of our monthly blog: “Latest AI trends in June 2023”. It’s no secret that AI is reshaping every facet of our lives, from how we communicate to how we work, play, and even think. In our latest blog, we’ll be your navigators on this complex journey, offering a digestible breakdown of the most groundbreaking advancements, compelling discussions, and controversial debates in AI for June 2023. We’ll shed light on the triumphs and the tribulations, the pioneers and the prodigies, the computations and the controversies.
Latest AI trends in June 2023: June 30th 2023
Meta provided insights into its AI systems
- Meta has released system cards that provide insight into the AI systems used on Facebook and Instagram, offering transparency to users.
- The system cards explain the AI systems’ functions, data reliance, and customizable controls across various sections of the apps.
- The move aims to address criticism about Meta’s transparency and provide users with a clearer understanding of how content is served and ranked on the platforms.
An AI will teach at Harvard next semester
- Harvard University’s popular coding course, CS50, will be taught by an AI instructor to approximate a 1:1 teacher-student ratio.
- CS50 professor David Malan stated that they are experimenting with GPT-3.5 and GPT-4 models for the AI teacher, aiming to provide personalized learning support.
- While acknowledging potential limitations, the AI instructor is expected to reduce time spent on code assessment and allow more meaningful interactions between teaching fellows and students.
The idea is to simulate a personalized teaching experience, although the experimental nature of AI-driven instruction raises some concerns.
- The AI teaching initiative was announced by CS50’s professor, David Malan.
- The course is trialing the use of GPT-3.5 and GPT-4 AI models.
Reliability of AI and its Impact on Students: Uncertainty about whether AI can consistently produce high-quality code makes this new teaching methodology experimental, with the students essentially serving as subjects of the experiment.
- Concerns are raised over the potential inability of GPT-3.5 and GPT-4 models to consistently output well-structured code.
- Thus, the decision to deploy an AI teacher is seen as somewhat experimental.
AI’s Role in EdTech and Course Management: AI’s application in educational technology marks an emerging trend, and it’s anticipated to help alleviate the workload of course staff.
- CS50 is highly popular on edX, a large-scale online learning platform developed in a partnership between MIT and Harvard.
- While acknowledging the potential for AI to underperform or make mistakes, especially in its early stages, Malan asserts that AI will help reduce staff workload in managing the course, thereby freeing them for direct student interaction.
Can Your Body’s Response to Music Predict Hit Songs? A New AI Study Claims It Can
Machine learning model detects heart attacks faster and more accurately than current methods
Microsoft introduces the First Professional Certificate on Generative AI
Yesterday, Microsoft launched a new AI Skills Initiative that promises to revolutionize technical skill training and bridge the workforce gap. This initiative is backed by Microsoft Philanthropies and is part of the company’s larger vision to democratize AI skills and create a public that is ready for the AI movement.
Key highlights:
- As part of the initiative, Microsoft is introducing what it calls the first Professional Certificate on Generative AI in online learning. This will be a game-changer in the field of online AI education.
- The initiative includes a global grant challenge, free online courses accessible here, and a specialized toolkit for teachers.
- This challenge will support organizations including nonprofits, social enterprises, and academic institutions.
Why you should care:
- According to the World Economic Forum, AI skills are the “third-highest priority for companies’ training strategies.” Becoming well versed in generative AI can give you a huge leg up in the professional world.
- By creating the first Professional Certificate on Generative AI, Microsoft is providing accessible, quality education in this emerging field.
- This is a great move by Microsoft to bring free education into a space that is so new for most people. You can learn more and apply here.
The recent update to the ChatGPT app on iOS now allows paid users to access information from Microsoft’s Bing. This feature is available to subscribers of the $20 per month ChatGPT Plus plan.
The integration, announced after Microsoft’s multibillion-dollar investment in OpenAI, is currently in beta for Plus users in the ChatGPT web app. The free version of ChatGPT can only surface information up to 2021.
To use Bing on the iOS app, users need to enable the Browsing option in the “New Features” section, select GPT-4 from the model switcher, and then choose “Browse with Bing”. An Android version of the app is expected soon.
Things to keep in mind about this upgrade:
Enhanced User Experience: The integration of Bing into the ChatGPT app will provide users with real-time, up-to-date information, enhancing the overall user experience.
Monetization Strategy: By making this feature available only to ChatGPT Plus users, OpenAI is encouraging more users to subscribe to the paid plan, which can increase their revenue.
Microsoft-OpenAI Partnership: This move further solidifies the partnership between Microsoft and OpenAI. It’s a clear indication of how Microsoft’s investment is influencing the development of ChatGPT.
Competitive Advantage: The integration of a search engine into an AI chatbot is a unique feature that can give ChatGPT a competitive edge over other AI chatbots in the market.
Future Developments: The announcement of an upcoming Android version of the app shows OpenAI’s commitment to expanding its user base and making its technology accessible to a wider audience.
MotionGPT: Human Motion as a Foreign Language
MotionGPT is an innovative motion-language model designed to bridge the gap between language and human motion. Paper page here (full 21-page PDF here).
Key takeaways:
– Unified Model for Language and Motion: Built on the premise that human motion displays a “semantic coupling” similar to human language, MotionGPT combines language data with large-scale motion models to improve motion-related tasks.
– Motion Vocabulary Construction: MotionGPT utilizes “discrete vector quantization” (breaking motion down into smaller parts), converting 3D motion into motion tokens, much the way words are tokenized. This “motion vocabulary” allows the model to perform language modeling on both motion and text in a consolidated way, thereby treating human motion as a specific language.
– Multitasking Powerhouse: The model isn’t just good at one thing; it’s proficient at multiple motion-related tasks, such as motion prediction, motion completion, and motion transfer.
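The “motion vocabulary” idea can be illustrated with a toy sketch of discrete vector quantization. The tiny hand-made codebook below is invented for illustration; MotionGPT learns a much larger codebook with a VQ-VAE. The principle is the same: each continuous motion frame maps to the index of its nearest codebook entry, and that index sequence is what the language model operates on, just like word tokens.

```python
import math

# Hypothetical toy codebook of 4 "motion code" vectors (values are made up;
# a real system learns hundreds of entries from motion-capture data).
CODEBOOK = [
    (0.0, 0.0, 0.0),   # token 0: standing still
    (1.0, 0.0, 0.0),   # token 1: step forward
    (0.0, 1.0, 0.0),   # token 2: raise arm
    (0.0, 0.0, 1.0),   # token 3: jump
]

def quantize(frame):
    """Map one continuous motion frame to the index of its nearest codebook entry."""
    dists = [math.dist(frame, code) for code in CODEBOOK]
    return dists.index(min(dists))

def tokenize_motion(frames):
    """Turn a sequence of 3D motion frames into discrete 'motion tokens',
    analogous to tokenizing words in a sentence."""
    return [quantize(f) for f in frames]

motion = [(0.1, 0.0, 0.1), (0.9, 0.1, 0.0), (0.1, 0.95, 0.0)]
print(tokenize_motion(motion))  # → [0, 1, 2]
```

Once motion is a token sequence, text tokens and motion tokens can share one vocabulary, which is what lets a single model handle both modalities.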
Why you should know:
AR/VR, animation, and robotics could be changed forever by the ability to input natural language descriptions of motion. Imagine you are a game developer who wants an in-game character to do a double backflip, and you could simply type that description and see it happen.
Or imagine a virtual character flawlessly replicating the choreography described in a script, or a robot performing complex tasks with instructions provided in simple natural language. That’s the promise of MotionGPT.
Valve has reportedly blocked the use of artificial intelligence-generated artwork in submitted content due to copyright concerns. This follows an incident where a game developer had a Steam game page submission rejected as it contained AI-created artwork seemingly based on copyrighted material.
AI-Generated Art Rejection: The Reddit user potterharry97 had a game submission on Steam rejected because the game included artwork generated by AI which appeared to be based on copyrighted third-party material.
- This information was shared by potterharry97 in a subreddit for game development.
- The rejected game had several assets generated by an AI system called Stable Diffusion.
Valve’s Response and Concerns: The use of AI-created artwork triggered alerts from a Valve moderator due to potential intellectual property rights infringement.
- Valve reportedly responded to potterharry97 stating that their game contained art assets generated by AI that seemed to use copyrighted material from third parties.
- Valve stated they could not distribute the game with the current AI-generated assets unless potterharry97 could prove ownership of all the intellectual property rights used in the dataset that trained the AI to create the game assets.
Resubmission and Valve’s Ongoing Concerns: Even after adjustments to the artwork, Valve still rejected the game submission, expressing continued concerns about copyright infringement.
- Potterharry97 made edits to the game art to minimize signs of AI usage and resubmitted the game.
- Despite these changes, Valve responded stating they were still declining to distribute the game due to unresolved questions about the rights to the training data used by the underlying AI technology.
Daily AI Update News from Salesforce, Databricks, Microsoft, OpenAI, and Oracle
- Salesforce introduces XGen-7B, a new 7B LLM trained on up to 8K sequence length for 1.5 trillion tokens.
– It is open-sourced under Apache License 2.0 and has the same architecture as Meta’s LLaMA models, except for a different tokenizer.
– On standard NLP benchmarks, it achieves comparable or better results than state-of-the-art open-source LLMs of similar model size: MPT, Falcon, LLaMA, RedPajama, and OpenLLaMA.
- Databricks launches LakehouseIQ and Lakehouse AI tools
– The data and AI company launched LakehouseIQ, a generative AI tool democratizing access to data insights.
– It announced new Lakehouse AI innovations aimed at making it easier for its customers to build and govern their own LLMs on the lakehouse.
- Microsoft announces AI Skills Initiative
– Includes free coursework developed with LinkedIn, a new open global grant challenge, and greater access to free digital learning events and resources.
- Introducing OpenAI London
– OpenAI announces its first international expansion with a new office in London, UK.
- Oracle taps generative AI to streamline HR workflows
– Announced new generative AI features for its Fusion Cloud Human Capital Management (HCM) offering, making it easier for enterprises to automate time-consuming HR workflows and drive productivity.
- A new app on the Microsoft Store brings the power of ChatGPT to Clippy
– Clippy by FireCube uses OpenAI to empower a Clippy assistant that sits on your desktop. Just like the old Clippy, it can help with writing letters, but can also do so much more.
- Salesforce to invest $4 billion in UK on AI innovation over the next five years
– The company said the plan builds on a previous five-year injection of $2.5 billion it set out in 2018.
- The famous gaming company Valve is reportedly not accepting AI-generated artwork in uploads to Steam, with its policies centering on developers owning all of the assets they upload to the platform. A developer shared their story on Reddit, detailing Valve’s rejection and the message that accompanied it.
- Microsoft President Brad Smith on Thursday talked up the benefits of regulating artificial intelligence and how the U.S. software giant can help, reiterating to a Brussels audience a message he delivered in Washington last month.
- OpenAI and its major backer Microsoft are facing a $3 billion lawsuit alleging the theft of personal information for training their AI models. The lawsuit, filed by sixteen pseudonymous individuals on Wednesday in federal court in San Francisco, claims that the companies’ AI products based on ChatGPT collected and disclosed personal information without proper notice or consent.
- AI text generators like ChatGPT, Bing AI chatbot, and Google Bard have been getting a lot of attention lately. These large language models can create impressive pieces of writing that seem totally legit. But here’s the twist: a new study suggests that we humans might be falling for the misinformation they generate.[4]
Latest AI trends in June 2023: June 28th 2023
Gamifying medical data labeling to advance AI
Scared tech workers are scrambling to reinvent themselves as AI experts
Faced with employment insecurity in the tech industry, many tech professionals are scrambling to reinvent themselves as AI experts, considering the surge in demand and high pay in the AI sector.
Scramble to Become AI Experts:
AI is emerging as a vital tech role in Silicon Valley, prompting tech workers to emphasize their AI skills amidst a volatile job market.
- A shift in focus towards AI technology is causing professionals to highlight their AI expertise during job hunting.
- The overall decrease in demand for non-AI tech jobs has resulted in job insecurity.
AI: The Attractive Investment:
Despite cutbacks in tech, investments keep pouring into AI, creating higher demand, improved pay, and better perks for AI specialists.
- The tech industry continues to invest heavily in AI, presenting lucrative opportunities for those skilled in AI.
- AI professionals are being compensated more, leading many to consider transitioning to AI roles.
- Possessing AI skills provides a significant advantage during salary negotiations.
The Transition to AI:
In response to the rising demand for AI, tech workers are exploring different avenues to gain AI skills, including on-the-job training, boot camps, and self-education.
- Tech professionals from other fields are looking to reposition themselves towards AI-focused roles.
- Many are opting for boot camps or other forms of training to acquire AI skills.
- Hands-on experience with AI systems is often seen as the best learning approach.
The Vatican has released a comprehensive guide on AI ethics. The document, a product of a newly formed entity called the Institute for Technology, Ethics, and Culture (ITEC), aims to offer guidance to tech companies navigating ethical challenges in AI, machine learning, and related areas.
Forming ITEC and the AI ethics handbook
- The collaboration between Pope Francis and Santa Clara University resulted in ITEC and its first undertaking: “Ethics in the Age of Disruptive Technologies: An Operational Roadmap”.
- This guidebook aims to help tech companies deal with ethical challenges in AI and other advanced technologies.
ITEC’s unique approach
- Rather than waiting for governmental regulation, ITEC proposes proactive guidance for tech companies grappling with AI’s ethical questions.
- The handbook promotes building values and principles into technology from the inception stage, rather than addressing issues retrospectively.
Guidelines and actionable steps
- The handbook provides an overarching principle: “Our actions are for the Common Good of Humanity and the Environment”.
- This principle is broken down into seven guidelines, including “Respect for Human Dignity and Rights” and “Promote Transparency and Explainability”, which further translate into 46 actionable steps.
- The guidebook details how to implement these principles and guidelines, providing examples, definitions, and specific steps to follow.
OpenAI is facing a class-action lawsuit led by a California law firm for alleged copyright and privacy violations. The suit challenges the use of internet data to train the firm’s technology, arguing that it improperly uses people’s social media comments, blog posts, and other information.
Background of the Lawsuit:
The lawsuit originates from a Californian law firm, Clarkson, which specializes in large-scale class-action suits. Their concern lies in OpenAI’s use of individuals’ online data – comments, blog posts, recipes, and more – for commercial advantage in building their AI models. They claim this practice infringes on copyright and privacy rights of these users.
- The suit has been filed in the northern district of California’s federal court.
- OpenAI has not yet commented on the matter.
The Legal Debate:
The lawsuit highlights an unresolved issue around generative AI tools, like chatbots and image generators. These tools use massive amounts of data from the internet to make predictions and respond to prompts. The legality of this data usage for commercial benefit is still unclear.
- Some AI developers believe this should be considered “fair use”, implying a transformative change of the data, which is a contentious issue in copyright law.
- The fair use question will likely be addressed in future court rulings.
Legal Challenges for AI Companies:
The current lawsuit is part of a broader trend of legal challenges against AI firms. Several incidents have occurred where companies were sued for the improper use of data in training AI models.
- Previously, OpenAI and Microsoft faced a class-action lawsuit over using computer code from GitHub to train AI tools.
- Getty Images sued Stability AI for alleged illegal use of its photos.
- OpenAI faced another lawsuit for defamation over the content produced by ChatGPT.
In a significant advancement for developers, a new tool, gpt-code-search, was released today that enables you to search your codebase using natural language. This tool is powered by OpenAI’s GPT-4 to streamline code retrieval, understanding, and querying, which significantly increases productivity.
If you want to get smarter in AI, look here first. The information below has been extracted from the Reddit announcement for your convenience, but you can find the GitHub repo here.
Key Features:
– Efficient: Code search, retrieval, and answering all performed with OpenAI’s GPT-4 function calling.
– Privacy-centric: Code snippets only leave your device when you ask a question and the LLM requires the relevant code.
– Ready-to-use: No need for pre-processing, chunking, or indexing. Get started right away!
– Universal: It works with any code on your device.
Why is it important?
This tool leverages the power of GPT-4 to scan your codebase, letting the model identify the most relevant code snippets and sparing you the need to manually copy and paste code or share it with another third-party service. Notably, it fits right into your terminal, sparing you the need for a new UI or window.
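To make the function-calling pattern concrete, here is a minimal, hedged sketch of the two pieces such a tool needs: a local search function, and the JSON schema that tells GPT-4 when and how to call it. The function name, schema, and file filter below are illustrative, not gpt-code-search’s actual interface, and the sketch deliberately omits the OpenAI API call that would drive the loop.

```python
import os
import re

def search_codebase(keyword, root="."):
    """Naive local search: return (path, line_number, line) tuples matching a
    keyword. In a tool like gpt-code-search, the model decides via function
    calling when to invoke a search like this, and only the matched snippets
    ever leave your machine."""
    matches = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith(".py"):  # illustrative filter
                continue
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8") as f:
                    for i, line in enumerate(f, start=1):
                        if re.search(keyword, line):
                            matches.append((path, i, line.rstrip()))
            except OSError:
                continue
    return matches

# JSON schema describing the function to the model (hypothetical shape).
SEARCH_TOOL_SCHEMA = {
    "name": "search_codebase",
    "description": "Search the local codebase for lines matching a keyword.",
    "parameters": {
        "type": "object",
        "properties": {"keyword": {"type": "string"}},
        "required": ["keyword"],
    },
}
```

The driving loop would pass `SEARCH_TOOL_SCHEMA` to the chat completion request, execute `search_codebase` whenever the model asks for it, and feed the results back as a function message.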
Here are the types of questions you can ask:
– Help with debugging errors and locating the relevant code and files
– Document extensive files or functionalities formatted as markdown
– Generate new code based on existing files and conventions
– Ask general questions about any part of the codebase
Despite a few limitations like the inability to load context across multiple files at once and limited search depth, this tool is a considerable step towards a more efficient coding experience.
For those seeking an even more powerful tool that uses vector embeddings and a more robust search and retrieval system, check out Wolfia Codex, the cloud-based big brother to gpt-code-search.
That’s it!
Large Language Models from OpenAI and other providers like Cohere, harvey.ai, and Hugging Face are advancing what can be predicted from text data in court cases. Like most real-world datasets, legal document collections contain issues that can be addressed to improve the accuracy of any model trained on that data. This article shows that data problems limit the reliability of even the most cutting-edge LLMs for predicting legal judgments from court case descriptions.
Finding and fixing these data issues is tedious, but we demonstrate an automated solution to refine the data using AI. Using this solution to algorithmically increase the quality of training data from court cases produces a 14% error reduction in model predictions without changing the type of model used! This data-centric AI approach works for any ML model and enables simple types of models to significantly outperform the most sophisticated fine-tuned OpenAI LLM in this legal judgment prediction task.
Simply put: feeding your models healthy data is more important than what particular type of model you choose to use!
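As a rough illustration of what “algorithmically increasing data quality” can mean in practice, here is a minimal sketch of one common data-centric step, assuming you already have out-of-sample predicted probabilities from any model; the classes, probabilities, and threshold are all made up for illustration. The idea is to flag training examples whose recorded label the model finds implausible, then re-review or drop them before retraining.

```python
def find_suspect_labels(pred_probs, labels, threshold=0.2):
    """Flag indices where the model assigns low probability to the given label.

    pred_probs: list of dicts mapping class -> out-of-sample predicted probability.
    labels: the (possibly noisy) labels the dataset currently has.
    """
    suspects = []
    for i, (probs, label) in enumerate(zip(pred_probs, labels)):
        if probs.get(label, 0.0) < threshold:
            suspects.append(i)
    return suspects

# Toy example: case 2's recorded label clashes with the model's confidence.
pred_probs = [
    {"plaintiff": 0.90, "defendant": 0.10},
    {"plaintiff": 0.30, "defendant": 0.70},
    {"plaintiff": 0.05, "defendant": 0.95},
]
labels = ["plaintiff", "defendant", "plaintiff"]
print(find_suspect_labels(pred_probs, labels))  # → [2]
```

Production tools use more careful statistics than a fixed threshold, but the workflow is the same: model-assisted triage of the labels, independent of which model you ultimately train.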
AI is increasingly helping doctors not only in technical tasks but also in communicating with patients empathetically. AI chatbots are proving to be useful in offering quality responses and showcasing empathy superior to human doctors in some cases.
AI in Human Aspects of Medical Care:
- AI tools like ChatGPT are being used to communicate with patients more empathetically.
- For instance, in an encounter with a patient’s family, ER physician Dr. Josh Tamayo-Sarver used ChatGPT-4 to explain a complex medical situation in simpler, more compassionate terms.
- The tool generated a thoughtful, empathetic response, which helped comfort the patient’s family and save the doctor’s time.
AI in Providing Compassionate Counsel:
- Dr. Gregory Moore used ChatGPT to counsel a friend with advanced cancer, including breaking bad news and dealing with her emotional struggles.
- Rheumatologist Dr. Richard Stern uses ChatGPT in his clinical practice to write kind responses to patient emails, provide compassionate replies to patient queries, and manage paperwork.
Reasons Behind the Success of AI in Displaying Empathy:
- AI tools, unlike humans, are not affected by work stress, insufficient coaching, or the need to maintain work-life balance.
- AI tools like ChatGPT have proven effective in generating text responses that make patients feel they are receiving empathy and compassion.
Continuing our exercise of sharing an easily digestible, condensed version of the day’s main updates in the world of AI.
- Baidu’s Ernie 3.5 beat ChatGPT on multiple metrics
– Baidu said its latest version of the Ernie AI model, Ernie 3.5, surpassed ChatGPT in comprehensive ability scores and outperformed GPT-4 in several Chinese capabilities. The model comes with better training and inference efficiency, which positions it for faster and cheaper iterations in the future. Plus, it will support external plugins.
- Google DeepMind’s upcoming chatbot set to rival ChatGPT
– Demis Hassabis, the CEO of Google DeepMind, announced their upcoming AI system, Gemini, which is poised to outperform OpenAI’s ChatGPT. Unlike current models such as GPT-4, Gemini is designed with novel capabilities, including planning and problem-solving. DeepMind is confident that Gemini will rival ChatGPT and establish a new benchmark for AI-driven chatbots.
- Unity’s Game-Changing AI Products for Game Development
– Unity announced 3 game-changing AI products:
– Unity Muse: a text-to-3D application inside games.
– Unity Sentis: lets you embed any AI model into your game or application.
– AI marketplace: developers can tap into a selection of AI solutions to build games.
- OpenAI planning to turn ChatGPT into a “supersmart personal assistant”
– The business version of ChatGPT could be equipped with in-depth knowledge of individual employees and their workplaces, providing personal assistance tasks such as drafting emails or documents in an employee’s unique style and incorporating the latest business data.
- Snowflake’s another GenAI push! Reveals LLM-driven Document AI and more at its annual conference
– Document AI is an LLM-based interface designed to enable enterprises to efficiently extract valuable insights from their vast array of documents. It represents a notable milestone in the data industry, revolutionizing the way enterprises derive value from their document-centric assets.
- NVIDIA H100 sets new industry-standard benchmark for generative AI in MLPerf debut
– A cluster of 3,584 H100 GPUs completed a massive GPT-3-based benchmark in just 11 minutes.
Voicebot is AI-powered software that allows users to interact using voice alone, without other forms of communication such as IVR menus or text chatbots. Voicebots use Natural Language Processing (NLP) to power the interaction. Today, we are going to use Dialogflow by Google to understand how one can make such a voicebot.
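Dialogflow performs this matching (plus speech recognition) as a managed service configured through its console and client libraries. As a hedged, toy stand-in, here is what the core step of a voicebot, mapping an utterance to an intent, looks like; the intents and keywords are invented for illustration, and a real bot would rely on Dialogflow’s NLP rather than keyword overlap.

```python
import re

# Hypothetical intents and trigger keywords (made up for this sketch).
INTENTS = {
    "check_balance": {"balance", "account", "money"},
    "book_appointment": {"book", "appointment", "schedule"},
    "greeting": {"hello", "hi", "hey"},
}

def detect_intent(utterance):
    """Return the intent whose keywords overlap most with the utterance,
    or 'fallback' when nothing matches. This crudely mimics what an NLP
    service like Dialogflow does with trained examples."""
    words = set(re.findall(r"[a-z]+", utterance.lower()))
    best_intent, best_score = "fallback", 0
    for intent, keywords in INTENTS.items():
        score = len(words & keywords)
        if score > best_score:
            best_intent, best_score = intent, score
    return best_intent

print(detect_intent("Hi, I want to book an appointment"))  # → book_appointment
```

In a real voicebot the pipeline is speech-to-text, then intent detection like the above, then a fulfillment handler that produces the spoken reply.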
- AlphaGo from Google’s DeepMind AI lab made history by defeating a champion player of the board game Go. Now Demis Hassabis, DeepMind’s cofounder and CEO, says his engineers are using techniques from AlphaGo to make an AI system dubbed Gemini that will be more capable than the one behind OpenAI’s ChatGPT.
- As anyone who’s seen depictions of AI in movies like 2001: A Space Odyssey and Alien will know, you simply don’t put your life-control system in the hands of a sentient computer. Now, though, NASA is seemingly going against everything Hollywood has taught us about AI space assistants by developing a system that will allow astronauts to use a natural-language, ChatGPT-like interface in space.
- A team of researchers, including professors from the University of Montana and UM Western, have found that OpenAI’s GPT-4 scored in the top 1% on the Torrance Tests of Creative Thinking (TTCT), matching or outperforming humans in the creative abilities of fluency, flexibility, and originality.
- Shares of U.S. chipmakers fell on Wednesday following reports that the Biden administration was planning new curbs on the export of computing chips for artificial intelligence to China as early as July.
OpenAI’s ChatGPT app can now search the web — but only via Bing
- OpenAI’s ChatGPT app introduces Browsing feature, allowing users to search the web, but only through Bing
- Browsing enables ChatGPT to provide up-to-date information beyond its training data, though limiting its search capabilities to Bing is viewed as a drawback.
- In addition to Browsing, the ChatGPT app now allows users to directly access search results within the conversation.
The article discusses the potential advancements and implications of Artificial Intelligence in the year 2073. It highlights several key areas of development and transformation that AI is expected to undergo. These include technological advancements in machine learning and deep neural networks, enhanced automation in various industries, the evolution of personalized AI assistants, the healthcare revolution, ethical considerations, socioeconomic impacts, and the collaborative relationship between humans and AI. The article emphasizes the need for responsible AI development and ethical frameworks to ensure that AI serves as a powerful tool for positive change while prioritizing human well-being.
Latest AI trends in June 2023: June 27th 2023
Look around and you’ll see hundreds of AI tools being pitched as social media, digital marketing, or blogging tools. However, most of them are simply web apps with a nice UI and a preset prompt over the OpenAI API. Regardless, there are quite a few that have stood out to me by offering more functionality than plain content generation. Here are my top picks for digital marketing and why:
MarketMuse – AI Content Optimization
MarketMuse is a real game-changer when it comes to content strategy and optimization. As a digital marketer, I appreciate the way it uses AI to analyze my website and offer personalized, data-driven insights, making my content planning considerably faster and more efficient. It automates the laborious task of content audits, eliminating the subjectivity often associated with this process. Additionally, MarketMuse’s competitive analysis tool, revealing gaps in competitor content, is particularly insightful. Its Content Briefs are an invaluable resource, providing a clear structure for topics to cover, questions to answer, and links to include, streamlining the content creation process. The AI features of MarketMuse offer a clear edge in optimizing my content strategy.
Plus AI for Google Slides – Presentations for Sales Pitches, Webinars, and Conferences
Plus AI stands out as it intertwines with my Google Slides workflow, rather than offering a final mediocre product like most slide deck generators. It helps co-create presentations with the ‘sticky notes’ feature, which essentially gives prompts for improving and finalizing each slide. A standout feature is ‘Snapshots’, enabling you to plug external data, for example, from different internal web apps into your presentations. I use Plus AI to craft the foundation for my slide deck and then go through each slide to incorporate the right snapshot. It’s free and integrates smoothly with Google Slides and Docs.
GoCharlie – AI Content Generation in Your Brand Voice + Content Repurposing
GoCharlie helps you churn out anything from blog posts and social media content to product descriptions. What stands out is its ability to learn and replicate your brand voice – it truly sounds like you. The ‘content repurposing’ feature is a godsend for recycling well-performing content for different platforms based on websites, audio files, and videos, saving me a huge chunk of time. It doesn’t hand you off-the-shelf content; it co-creates with you, giving you the autonomy to review, refine and personalise. It’s also got a free trial, and as a user, it’s been a worthwhile addition to my digital marketing toolkit.
AdCreative.ai – AI-Powered Ad & Social Creatives
Having a tool like AdCreative.ai in my digital marketing arsenal is such a game-changer. It employs artificial intelligence to produce conversion-oriented ad and social media creatives in just seconds. Its capacity to generate both visually appealing and engaging creatives, while also incorporating optimized copy, enables me to enhance my advertising campaigns’ click-through and conversion rates significantly. A feature I find especially valuable is its machine learning model which learns from my past successful creatives and tailors future ones to be more personalized and efficient. The scalability is impressive too; whether I need a single creative or thousands in a month, it delivers seamlessly. The ease of use, effectiveness, and time-saving capabilities make this tool an absolute winner in my book.
BrandBastion – AI-Driven Community Management
As a digital marketer, one tool I find incredibly beneficial is BrandBastion. It shines with its AI-driven approach to managing social media conversations around the clock, with impressive precision and speed. The AI here does a fantastic job at identifying harmful comments and hiding them, keeping brand reputation intact. What sets it apart is the balance it strikes between automation and human touch – the AI analyses conversations and alerts human content specialists for any sensitive issue, ensuring nothing gets overlooked. Additionally, the “BrandBastion Lite” platform serves as a centralized space to understand brand sentiment, moderate comments, and engage with followers, making it a breeze to manage all social media conversations in one place.
Contlo – Autonomous Generative Marketing
Contlo stands out as a highly autonomous AI-powered marketing tool that significantly streamlines my marketing efforts. One of its prime strengths is the Generative AI Model that enables creation of contextually relevant marketing materials, including landing pages, emails, and social media creatives. Speaking with the AI through a chat interface simplifies my entire marketing process without having to grapple with a complex UI. I’ve also found the generative marketing workflows to be particularly useful in creating custom audience segments and scheduling campaigns based on dynamic user behavior. Even more, its constant learning and self-improvement based on my usage make it a robust tool that evolves with my marketing needs.
GapScout – AI-Driven Market Insights
The strategic force behind my business decisions is GapScout, a unique AI tool that leverages customer reviews for gaining market insights. Its distinguishing feature is the AI’s ability to meticulously scan and analyze reviews about my company and competitors, revealing potential opportunities and highlighting gaps in the market. This level of scrutiny offers a goldmine of data-driven feedback, helping me improve offers, identify new revenue avenues, and refine sales copy to boost conversion rates. For an edge in the market, GapScout’s competitor surveillance keeps me informed of their activities, saving precious time and effort. It’s an invaluable tool, providing clear, actionable insights that fuel data-backed business growth.
Predis.ai – AI Social Media Management
As a digital marketer, Predis.ai is my go-to tool for generating and managing social media content. The platform’s AI capabilities are quite comprehensive; they’re particularly useful for generating catchy ad copies and visually engaging social media posts, and for transforming product details from my e-commerce catalog into ready-to-post content. The tool’s capability to convert blogs into captivating videos and carousel posts adds a fresh spin to the way I repurpose my content. Plus, it’s a lifesaver when it comes to scheduling and publishing – it integrates seamlessly with multiple platforms and takes care of all posting duties in one place. Predis.ai essentially puts AI in the driving seat of my social media management and I couldn’t be more pleased with the efficiency it offers.
QuantPlus – AI Ad Creative Insights
QuantPlus truly brings AI to the ad creation process in a novel way. Rather than just run multivariate tests, it deconstructs historical ad campaign data to analyze individual elements. Using this data, the tool ranks the performance of various elements such as CTAs, phrase combinations, imagery content, colors, keywords, and even gender distribution, among many others. It’s like having a super-powered marketing analyst, giving me access to insights about top performing elements and aiding me in making more informed design decisions. This makes the ad creation process not just more efficient, but significantly more effective, as I’m working off proven high-ranking creative elements. It’s an indispensable part of my digital marketing toolkit.
It looks like you can use ChatGPT to bypass paywalls
It probably uses the same mechanism as 12ft.io, where it reads the Google-cached version, which doesn’t have a paywall for SEO reasons.
Some paywalls are simply pasted over the graphical interface – the content is technically still there, it just can’t be seen in a standard web browser.
If you press F12 in a web browser to open “developer mode”, you can access the code of a web page. In some cases, the code for the graphical element of the paywall can be deleted, allowing normal reading.
I suspect ChatGPT simply reads the code for rendering the text in there – it doesn’t care that there’s a bit of code amounting to “if person is not logged in, display an annoying banner saying pay us money” – it simply ignores it.
Most big websites (like Medium, etc.) are smart enough not to load the entire content unless you’re logged in and have a subscription. However, they want their content indexed by Google, so the paywall is nonexistent if you change your User-Agent to Googlebot. (There are plenty of extensions on the Web Store for this.)
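For the curious, here’s a minimal Python sketch of the User-Agent trick described above. The Googlebot string and example URL are illustrative, and whether any given site actually serves full content to crawlers varies:

```python
import urllib.request

# A commonly published Googlebot User-Agent string (illustrative).
GOOGLEBOT_UA = "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"

def build_request(url: str) -> urllib.request.Request:
    """Build a request that presents itself as Googlebot rather than a browser."""
    return urllib.request.Request(url, headers={"User-Agent": GOOGLEBOT_UA})

# Passing the result to urllib.request.urlopen() would fetch the page
# with the crawler User-Agent; some paywalled sites serve full content to it.
req = build_request("https://example.com/article")
```

Whether this works depends entirely on how the site gates its content, and some publishers verify crawler IP ranges rather than trusting the header.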
Just stumbled upon this study from Business Name Generator that says nearly 20% of employees wouldn’t mind their bosses being swapped out for an AI robot. Sounds “crazy”, right?
Turns out, people are tired of human bosses showing favoritism, lacking empathy, and generally being disorganized. Some of us think a robot could handle the job better and, more importantly, make the workplace drama-free. About a third of us reckon it’s just a matter of time before AI takes over the workplace anyway.
Interestingly, even in sectors like arts and culture, 30% of workers in the UK were down for the idea. Now that’s a plot twist, eh?
It’s a Machine’s World After All?
Seeing this trend was definitely a surprise. I mean, can you imagine a robot doing your performance review or telling you to have that report done by EOD?
However, I get where these folks are coming from. We’ve all had that boss who could make Godzilla seem like a cute puppy. But an AI? Wouldn’t it lack the human touch, the empathy we sometimes need in our work life?
On the flip side, a robot wouldn’t play favorites or thrive on office politics. It’s a tough call. I’m curious to see how the workplace evolves with AI advancements.
What do you guys think? Ready to report to R2D2 or still holding out for human bosses?
Databricks snaps up MosaicML to build private AI models
Who else thinks we’ll see a bunch of M&A over the coming months? This feels like a “gold rush” moment for companies. I’m fascinated by the number of models out there and what consolidation in the space will look like. Regarding the Databricks acquisition, a few things stood out to me in terms of the impact it could have.
Talent Acquisition – The fact that Databricks is retaining the entire MosaicML team highlights the current high demand for talent in the AI field. Skilled AI professionals are a valuable asset, and this move allows Databricks to absorb a team with expertise.
Expansion of Databricks’ Offerings – The addition of MosaicML to Databricks’ portfolio significantly extends its capabilities in the AI domain. This places Databricks in a stronger position to provide AI solutions to its customers.
Democratization of AI – MosaicML’s focus on enabling organizations to build their own LLMs using their data democratizes access to AI technology. This not only empowers more businesses to leverage AI but also leads to more diverse AI models that can be tailored to specific organizational needs.
Market Consolidation – As more companies recognize the importance of AI, we’re likely to see more mergers and acquisitions. This could accelerate the pace of AI development and increase the competitive pressure on companies in the tech industry.
What are your thoughts on this acquisition? Which other companies are primed acquisition targets?
Since the release of ChatGPT, we have witnessed a rapid development of open-source generative AI and commercial AI systems. This article will explore a new state-of-the-art model called Claude and compare it to ChatGPT across various data science tasks.
Planning
Screenshot by Author | ChatGPT
Screenshot by Author | Claude | poe.com
Problem: In the prompt, we included a dataset description and project goal for building a loan classifier model. Those interested in accessing the dataset and project planning can find them in A Guide to Using ChatGPT for Data Science Projects.
Verdict: Both are great at project planning, but ChatGPT is slightly better at presenting the information and suggesting additional steps.
Programming
Problem: We asked both models to optimize a nested Python loop example.
Verdict: While ChatGPT attempted to optimize the code by storing values in a list, Claude was able to convert the nested loops into a list comprehension, resulting in faster execution. Therefore, Claude emerged as the winner.
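For readers curious what that optimization looks like, here’s a generic illustration (not the exact code from the test) of rewriting a nested loop as a list comprehension:

```python
# Nested-loop version: builds all (x, y) pairs with explicit loops and append().
def pairs_loop(xs, ys):
    out = []
    for x in xs:
        for y in ys:
            out.append((x, y))
    return out

# List-comprehension version: same result, more idiomatic and typically faster
# because it avoids repeated attribute lookups and method calls on the list.
def pairs_comprehension(xs, ys):
    return [(x, y) for x in xs for y in ys]
```

Both produce identical output; the comprehension simply expresses the iteration in one pass that CPython can execute more efficiently.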
Data Analysis
Problem: We tasked both models with conducting exploratory data analysis on a loan classification dataset.
Verdict: Although ChatGPT demonstrated strong skills in data analysis, Claude’s proficiency in writing efficient Python code ultimately gave it the edge. While ChatGPT employed a variety of libraries for data analysis, Claude relied solely on the pandas library for data visualization, processing, and analysis, showcasing its mastery of this tool. As a result, Claude emerged as the clear winner.
Machine Learning
Problem: We asked both models to perform detailed model evaluations using cross-validation and assess performance metrics such as accuracy, precision, recall, and F1 score.
Verdict: Claude outperformed ChatGPT in this regard by employing cross-validation for label prediction and subsequently utilizing various metrics to gauge model performance. In contrast, ChatGPT relied on cv_scores and a separate model to determine classification metrics.
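As a rough illustration of the metrics mentioned above, here’s a small pure-Python sketch of computing precision, recall, and F1 from predictions. A real project would typically reach for scikit-learn’s cross_val_score and metrics functions instead:

```python
def precision_recall_f1(y_true, y_pred, positive=1):
    """Compute precision, recall, and F1 for a binary classification result."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

In a cross-validation loop these metrics would be computed once per held-out fold and then averaged.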
Time Series
Problem: We presented a data description and tasked both models with building a machine learning model for predicting stock prices.
Verdict: Claude demonstrated a better understanding of the task, while ChatGPT continuously asked follow-up questions. However, both models excelled at generating code, with ChatGPT resorting to an outdated method using from statsmodels.tsa.arima.model import ARIMA, while Claude implemented a more advanced approach using GradientBoostingRegressor. Claude was the winner in this case.
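To make the gradient-boosting approach concrete, a regressor like GradientBoostingRegressor needs the raw price series turned into supervised (features, target) pairs first. Here’s a hedged sketch of that lag-feature step; the window size and data are illustrative, not from the article’s experiment:

```python
def make_lag_features(prices, n_lags=3):
    """Turn a price series into supervised rows: the previous n_lags prices
    become the features, and the next price becomes the target."""
    X, y = [], []
    for i in range(n_lags, len(prices)):
        X.append(prices[i - n_lags:i])  # e.g. [p[i-3], p[i-2], p[i-1]]
        y.append(prices[i])             # predict p[i]
    return X, y
```

The resulting X and y could then be passed to any sklearn-style regressor’s fit() method, gradient boosting included.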
Natural Language Processing
Problem: We asked both models to write a Python code for fine-tuning the GPT-2 model on a new dataset.
Verdict: ChatGPT seemed to have hallucinated, inventing a fine-tuning library that doesn’t exist. Claude, on the other hand, used the Transformers library and successfully fine-tuned the model. Therefore, Claude wins this round.
Take a look at the comparison between Bard and ChatGPT for Data Science to understand how Google Bard measures up against ChatGPT in various data science assignments.
Claude vs ChatGPT: The Final Verdict
For data-related tasks that require a deep understanding of technical context and the ability to generate optimized code, Claude is the recommended choice. However, for all other tasks, ChatGPT is the preferred option, especially with its advanced GPT-4 model.
Note: Claude-Instant-100K model is on par with GPT-4 in terms of performance, but it’s not widely available. You can also check out the non-official benchmark results at chat.lmsys.
Practical Applications of Claude and ChatGPT in Data Science
Claude and ChatGPT can provide valuable assistance in various data science tasks, such as:
Extensive project planning
Both tools can assist you in developing a comprehensive project plan. They can also provide insights, methodologies, and tools to help you prepare for the data science project.
Research
With generative AI, you can learn new concepts, languages, and even frameworks. Moreover, these tools can help you gather information, summarize research papers, and generate content.
Code generation
Both Claude and ChatGPT can generate code snippets for data preprocessing, feature engineering, model training, and evaluation, saving time and effort for data scientists.
Unit testing
You can also automatically generate test cases based on the code and specifications provided.
Debugging
Each tool can provide suggestions and insights into potential errors or issues in code or data pipelines, giving you the chance to spot mistakes and learn how and why they’re impacting your code.
Reporting
ChatGPT and Claude can both understand data analysis results and help you generate analytical data reports that demonstrate your findings.
Optimization
You can optimize Python, SQL, and R code using these tools and also use them to recommend efficient algorithms or techniques to improve your code.
Performing statistical tests
You can generate statistical tests, such as hypothesis testing, ANOVA, t-tests, and regression analysis, based on the provided data and research questions.
Understanding data analysis results
Both AI tools can interpret your results, providing explanations, insights, and recommendations based on statistical findings and visualizations. This can help you understand your findings better and also help explain them to others.
Automating data science tasks
With the help of plugins, you can automate data analysis and various other tasks in your data science workflow.
To learn how to use ChatGPT for building end-to-end machine learning projects, check out the Guide to Using ChatGPT For Data Science Projects.
New AI method for graphing scenes from images
Daily AI News 6/27/2023
Latest AI trends in June 2023: June 26th 2023
AI Marketplace – AI App Store
Top Five AI gadgets in 2023
Fuss-free 3D (ZTE Nubia Pad 3D)
The history of 3D entertainment has demonstrated one thing: if consumers have to make any kind of effort – wearing glasses, buying a special cable, seeking out particular formats – they stop caring about it. However, the Nubia (branded in the US as the Leia Lume Pad 2) is a high-spec Android tablet that expertly straddles the 2D and 3D worlds. Its AI-driven face tracking “steers” 3D pictures and videos to the eyes so they’re always in sharp focus regardless of viewing angle. It can present 2D images in 3D by accurately guessing their depth, and its built-in camera captures in 3D, but the resulting images and videos can be shared and viewed in 2D on standard devices. 3D is back – but this time it’s easy. ZTE Nubia Pad 3D, £1,239
A doctor on your wrist (MymonX)
Many people have a sporadic interest in their health, happily assuming that they’re fine until it becomes clear that they’re not. MymonX, worn on the wrist and with a neat touchscreen interface, offers AI-driven confirmation of wellbeing, quietly keeping tabs on heart activity (via an ECG monitor), blood pressure, oxygenation, respiratory rate, temperature, sleep, physical activity and non-invasive glucose monitoring. Those numbers, whether gathered directly or derived via AI, get shunted to Apple’s Health app or Google’s Health Connect – but a £9.99-a-month subscription also gets you a monthly doctor-reviewed health report where notable changes are flagged up. Its ultimate aim: to head off poor health before it happens. MymonX, £249
The learning cycle (Acer ebii)
You may associate Acer with budget laptops, but it has a subsidiary, Xplova, dedicated to cycling computers, and some of that tech has found its way into this ebike. The ebii (rhymes with “TV”) works in tandem with an app (ebiiGO), using AI modelling to provide more power when you need it based on cycling conditions and your technique. It can also intelligently conserve power to make sure your battery doesn’t die halfway through a journey (a common scenario when you’re enjoying a little too much power assistance). Collision detectors, automated lighting (front, back and sides) and security features (automatic locking when you walk away) make it a perfect urban runabout, and at a lean 16kg it feels more nimble than its heftier competitors. Acer ebii, €1,999
Follow that car (Sony a7R V)
Babies learn the skill of focusing on faces by the time they’re around three months old. Historically, cameras have needed our assistance to accomplish this task, but the AI-driven processor in the newest Sony a7R can recognise the presence of a human face (or body) and keep it in sharp focus. No machine learning happens within the camera itself, but it already knows what certain things look like – specifically humans, animals, insects, birds, trains, planes and automobiles – and prioritises them as you shoot. If you want to override its choices, you can take control with a tap of a button. It’s a fearsomely powerful camera, but a joy to use out of the box, too. Some might say, “It’s not real photography because it’s not difficult enough.” They’re wrong. Sony a7R V, £3,999
Google DeepMind’s CEO says its next algorithm will eclipse ChatGPT
Google’s DeepMind is developing an advanced AI called Gemini. The project is leveraging techniques used in their previous AI, AlphaGo, with the aim to surpass the capabilities of OpenAI’s ChatGPT.
Project Gemini: Google’s AI lab, DeepMind, is working on an AI system known as Gemini. The idea is to merge techniques from their previous AI, AlphaGo, with the language capabilities of large models like GPT-4. This combination is intended to enhance the system’s problem-solving and planning abilities.
- Gemini is a large language model, similar to GPT-4, and it’s currently under development.
- It’s anticipated to cost tens to hundreds of millions of dollars, comparable to the cost of developing GPT-4.
- Besides AlphaGo techniques, DeepMind is also planning to implement new innovations in Gemini.
The AlphaGo Influence: AlphaGo made history by defeating a champion Go player in 2016 using reinforcement learning and tree search methods. These techniques, also planned to be used in Gemini, involve the system learning from repeated attempts and feedback.
- Reinforcement learning allows software to tackle challenging problems by learning from repeated attempts and feedback.
- Tree search helps the system explore and remember possible moves in a scenario, such as a game.
Google’s Competitive Position: Upon completion, Gemini could significantly contribute to Google’s competitive stance in the field of generative AI technology. Google has been pioneering numerous techniques enabling the emergence of new AI concepts.
- Gemini is part of Google’s response to competitive threats posed by ChatGPT and other generative AI technology.
- Google has already launched its own chatbot, Bard, and integrated generative AI into its search engine and other products.
Looking Forward: Training a large language model like Gemini involves feeding vast amounts of curated text into machine learning software. DeepMind’s extensive experience with reinforcement learning could give Gemini novel capabilities.
- The training process involves predicting the sequences of letters and words that follow a piece of text.
- DeepMind is also exploring the possibility of integrating ideas from other areas of AI, such as robotics and neuroscience, into Gemini.
• Gemini is a large language model like GPT-4, the technology powering ChatGPT, but it will integrate techniques used in AlphaGo, another AI system from DeepMind that defeated a Go champion in 2016. This combination aims to give Gemini new capabilities such as planning and problem-solving.
• Gemini will build upon reinforcement learning and tree search methods used in AlphaGo. Reinforcement learning is a technique where software learns by making repeated attempts at challenging problems and receiving feedback on its performance. Tree search is a method used to explore and remember possible moves in a game like Go.
• The development of Gemini is expected to take several months and could cost tens or hundreds of millions of dollars. For comparison, OpenAI CEO Sam Altman stated that the creation of GPT-4 cost over $100 million.
• Once complete, Gemini could play a significant role in Google’s strategy to counter the competitive threat posed by ChatGPT and other generative AI technologies.
• Google has recently combined DeepMind with its primary AI lab, Brain, to create Google DeepMind. The new team plans to boost AI research by uniting the strengths of the two groups behind recent foundational AI advancements.
• Google acquired DeepMind in 2014 after it demonstrated impressive results with software using reinforcement learning to master simple video games. Subsequently, DeepMind proved the technique’s ability to perform tasks that seemed uniquely human, often with superhuman skill, such as when AlphaGo defeated Go champion Lee Sedol in 2016.
• The training of a large language model like GPT-4 involves feeding vast amounts of curated text from various sources into machine learning software. An additional step is to use reinforcement learning based on human feedback on an AI model’s answers to enhance its performance. DeepMind’s extensive experience with reinforcement learning could potentially give Gemini novel capabilities.
• DeepMind researchers might also try to augment large language model technology with insights from other areas of AI, such as robotics or neuroscience. Learning from physical experience of the world, as humans and animals do, is considered crucial for enhancing AI’s capabilities.
• Hassabis is responsible for accelerating Google’s AI efforts while managing unknown and potentially severe risks. Despite concerns about the potential misuse of AI technology or the difficulty in controlling it, Hassabis believes the potential benefits of AI in areas like health and climate science make it crucial that humanity continues to develop the technology.
• DeepMind has been examining the potential risks of AI even before ChatGPT emerged. Hassabis joined other high-profile AI figures in signing a statement warning that AI might someday pose a risk comparable to nuclear war or a pandemic.
• One of the main challenges currently, according to Hassabis, is determining the likely risks of more capable AI. He stated that DeepMind might make its systems more accessible to outside scientists to help address concerns that experts outside big companies are becoming excluded from the latest AI research.
Political campaigns are turning to AI to create election materials. For example, an attack ad was posted on Twitter by Ron DeSantis’s campaign team, featuring an AI-generated image of Donald Trump and Dr. Anthony Fauci in a controversial pose.
- Many viewers quickly noted that the image was fake.
- Such AI applications have been used from mayoral races to the presidential election.
AI’s Efficiency in Election Campaigns: Artificial intelligence shows promise in engaging voters and soliciting donations. The Democratic National Committee tested AI-generated content and reportedly found it as effective as human-created materials.
- AI-generated content resulted in good levels of engagement and donations.
- However, none of the AI-generated messages were attributed to President Joe Biden or anyone else.
AI Mishaps in Campaigns: AI is not flawless and can make notable mistakes. For instance, in Toronto’s mayoral race, conservative candidate Anthony Furey used AI-generated images that contained errors, like a figure with three arms.
- Despite the mistakes, these images have helped Furey become a more recognizable candidate.
- The mistakes were used by other candidates to critique Furey.
Concerns about AI and Disinformation: Experts are worried about the potential for AI to spread disinformation. AI tools are becoming more accessible and affordable, which might lead to a chaotic situation where real and fake campaign claims are indistinguishable.
- AI could be used to target specific audiences with misinformation, particularly swing voters.
- A Centre for Public Impact report discussed the issue of targeted ads based on user data, as seen in the 2016 US elections.
Responses to AI in Election Campaigns: Not everyone is comfortable with the growing role of AI in election campaigns. The CEO of OpenAI, the organization that created ChatGPT, expressed concerns during a congressional appearance.
- He acknowledged that people are anxious about how advancing AI could change society.
- There has been no comment from the DeSantis and Trump campaign teams about the use of AI in their campaigns.
AI chatbots are being utilized to fill junk websites with AI-generated text that draws in advertisers, causing concern about the increasing presence of such content on the web. This practice not only wastes substantial amounts of ad spend but also threatens to accelerate the degradation of internet quality.
The Use of AI in Online Advertising: AI chatbots have found a new purpose: filling low-quality websites with AI-generated content that attracts advertising dollars. Over 140 top brands are unknowingly financing ads displayed on these unreliable AI-created sites. Mostly, these ads are served by Google, contradicting the company’s own rules.
- These AI-fueled junk websites are exploiting a system called “programmatic advertising,” which allows ads to be placed on various websites automatically to maximize audience reach.
- This method leads to brands inadvertently funding ads on websites they may not even be aware of.
Content Farms and Made-for-Advertising Sites: These low-quality websites, also known as “made for advertising” sites, are a growing issue. They use tactics such as clickbait, autoplay videos, and pop-up ads to maximize revenue from advertisers. They are now increasingly using generative AI to automate their processes, enabling them to generate more content with less effort.
- Content farms are taking advantage of the lack of oversight in ad placements to attract substantial revenue.
- According to a survey, 21% of ad impressions were directed to these made-for-advertising sites, with an estimated $13 billion wasted annually.
- The proliferation of generative AI is only worsening this situation by allowing more such sites to be created with minimal effort.
Spotting AI-Generated Content: NewsGuard, a media research organization, is identifying these AI-written sites by looking for error messages typical of AI systems, which are then reviewed by a human analyst. The problem is rapidly expanding, with around 25 new AI-generated sites discovered each week.
- Sites filled with AI-generated content often contain typical AI error messages, which are used by NewsGuard to identify them.
- The rate of discovery suggests a rapidly growing problem, with these low-quality sites being produced in multiple languages.
Ineffective Advertising Policies: Most ad exchanges and platforms have policies against serving ads on content farms, but these policies are not consistently enforced. Despite Google’s ad policy against “spammy automatically generated content,” 90% of the ads from top brands on these AI-written sites were served by Google.
- Google’s own policy communications manager reaffirms the company’s strict policies about the type of content that can monetize on their platform.
- The enforcement of these policies often focuses on content quality, not how it was created. Still, they often fail to detect and block violations effectively.
- Other ad exchanges are also guilty of serving ads on such sites, even when they seem to be violating quality policies.
Why AI Will Save the World by Marc Andreessen
The era of Artificial Intelligence is here, and boy are people freaking out.
Fortunately, I am here to bring the good news: AI will not destroy the world, and in fact may save it.
First, a short description of what AI is: The application of mathematics and software code to teach computers how to understand, synthesize, and generate knowledge in ways similar to how people do it. AI is a computer program like any other – it runs, takes input, processes, and generates output. AI’s output is useful across a wide range of fields, ranging from coding to medicine to law to the creative arts. It is owned by people and controlled by people, like any other technology.
A shorter description of what AI isn’t: Killer software and robots that will spring to life and decide to murder the human race or otherwise ruin everything, like you see in the movies.
An even shorter description of what AI could be: A way to make everything we care about better.
https://a16z.com/2023/06/06/ai-will-save-the-world/
Credit card fraud is one of the biggest problems faced by government agencies and large companies, with enormous sums of money involved in these transactions, so a solution is needed to stem the loss of billions of dollars.
Machine learning can help by instantly recognizing fraudulent transactions, saving at least some of the money involved. However, service providers face many challenges when developing AI solutions to finance problems.
Have a look at some of the problems:
Model training in supervised learning requires good-quality data. However, banks’ privacy policies prevent them from sharing data in its raw form for training, which raises the issue of data availability.
Even with a quality dataset obtained without violating any privacy policies, the data would be highly imbalanced, making it tough to distinguish fraudulent transactions from authentic ones.
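To illustrate the imbalance problem, here’s a naive random-oversampling sketch in pure Python. Real systems would more likely use class weighting or techniques like SMOTE, and the binary 0/1 label assumption here is mine:

```python
import random

def oversample_minority(rows, labels, minority=1, seed=0):
    """Naively duplicate minority-class rows until classes are balanced.

    Assumes binary labels 0/1 (illustrative only); duplicating rows this way
    risks overfitting, which is why weighting or SMOTE is usually preferred.
    """
    rng = random.Random(seed)
    minority_rows = [r for r, l in zip(rows, labels) if l == minority]
    majority_rows = [r for r, l in zip(rows, labels) if l != minority]
    # Sample extra copies of minority rows to match the majority count.
    extra = [rng.choice(minority_rows)
             for _ in range(len(majority_rows) - len(minority_rows))]
    balanced_rows = majority_rows + minority_rows + extra
    balanced_labels = ([1 - minority] * len(majority_rows)
                       + [minority] * (len(minority_rows) + len(extra)))
    return balanced_rows, balanced_labels
```

With a fraud dataset where, say, 1 in 1,000 transactions is fraudulent, a classifier trained on the raw data can score 99.9% accuracy by predicting “legitimate” every time; rebalancing (or reweighting) is what forces it to actually learn the fraud signal.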
https://www.seaflux.tech/blogs/finance-ai-application
Daily AI News 6/26/2023
- A combination of citizen science and artificial intelligence has been used to prove different populations of the weedy or common seadragon found across their range on the Great Southern Reef are genetically linked.
- Microsoft co-founder Bill Gates said generative AI chatbots can teach kids to read in 18 months rather than years. AI is beginning to prove that it can accelerate the impact teachers have on students and help solve a stubborn teacher shortage.
- Samuel L. Jackson is not surprised by the worrying rise of artificial intelligence because, as he claimed, he predicted this trend a long time ago. During an interview with Rolling Stone, the Marvel star shared that he had earlier warned about the tech rise.
- A U.S. agency will launch a public working group on generative artificial intelligence (AI) to help address the new technology’s opportunities while developing guidance to confront its risks, the Commerce Department said.
- Microsoft Research has introduced ZeRO++, a system of communication optimization strategies built on top of ZeRO to offer unmatched efficiency for large model training, regardless of batch size limitations or cross-device bandwidth constraints. It includes three techniques that collectively reduce the communication volume of ZeRO by 4x, enabling up to 2.16x better throughput at 384-GPU scale. Moreover, it accelerates ChatGPT-like model training with RLHF.
- New research has proposed RepoFusion, a framework to train models to incorporate relevant repository context. Code assistants like GitHub Copilot often struggle to generalize effectively in unforeseen or unpredictable situations, resulting in undesirable predictions. Instances of such scenarios include code that uses private APIs or proprietary software, work-in-progress code, etc. RepoFusion addresses this issue, and models trained with it significantly outperform several larger models despite being many times smaller in size.
- DragGAN’s source code release – The interactive point-based manipulation method for image editing that received major hype when introduced has released its official code.
- LinkedIn is increasing its AI use – Its new AI image detector spots fake profiles with a 99% success rate, and an upcoming feature will allow users to directly utilize generative AI within the LinkedIn share box.
- Hugging Face’s version of Whisper gets a new feature – Whisper has added a much-requested new feature: word-level timestamps.
Machine Learning vs. Deep Learning: What’s the Difference?
Machine Learning
- Can make low/moderate complexity decisions
- Data features are defined by humans
- Accuracy improvements by system and humans
- Uses labeled or unlabeled data
- Does not use neural networks
- Requires moderate computer processing power, depending on model complexity and data set
Deep Learning
- Can make decisions and take actions of high complexity
- Can discover and define data features on its own
- Accuracy improvements primarily made by the system
- Uses labeled or unlabeled data
- Uses neural networks of 3+ layers (but often 100+)
- Requires high computer processing power, especially for systems with more layers
An Example of Machine Learning vs Deep Learning
To understand how ML and Deep Learning differ, imagine a system that recognizes basketballs in pictures. To work correctly, each system needs an algorithm to perform the detection and a large set of images (some that contain basketballs and some that don’t) to analyze.
- For the Machine Learning system, before the image detection can happen, a human programmer needs to define the characteristics or features of a basketball (relative size, orange color, etc.). Once that’s done, the model can analyze the photos and deliver images that contain basketballs. The more often the model performs this task, the better it should get. A human can also review the results and modify the processing algorithm to improve accuracy.
- For the Deep Learning system, a human programmer must create an Artificial Neural Network composed of many layers, each devoted to a specific task. The programmer doesn’t need to define the characteristics of a basketball. When the images are fed into the system, the neural network layers learn how to determine the characteristics of a basketball on their own. They then apply that learning to the task of analyzing the images. The Deep Learning system assesses the accuracy of its results and automatically updates itself to improve over time without human intervention.
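Here’s a deliberately crude sketch of the “human-defined features” step in the classical ML approach — the thresholds are invented for illustration, and a real pipeline would compute these statistics from actual pixels:

```python
def extract_features(pixel_mean_rgb, circularity):
    """Hand-crafted features: the human-defined step in classical ML.

    A programmer decides what matters (orange color, round shape) and
    encodes it explicitly; thresholds here are illustrative guesses.
    """
    r, g, b = pixel_mean_rgb
    orangeness = (r > 150) and (g > 60) and (b < 100)  # crude "is it orange?"
    roundness = circularity > 0.8                      # crude "is it round?"
    return [int(orangeness), int(roundness)]

def is_basketball(features):
    """A trivial rule over the hand-crafted features."""
    return sum(features) == len(features)
```

The deep learning system skips exactly this step: instead of a programmer writing extract_features, the network’s early layers learn their own notions of color and shape from the labeled images.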
AI Learning Resources
- The “T” in GPT stands for Transformers. Here’s an Nvidia explainer on Transformers.
- GPT-4 is trained using RLHF. Learn how RLHF actually works and why open-source RLHF is difficult.
- Interactive workbooks to combine generative AI models in one document. I find interactive notebooks to be the best way to learn concepts in programming.
- Top GitHub open-source AI repos – some of the best are here…
GitHub is a web-based platform that serves as a version control repository and collaborative platform for software development projects. It allows developers to store, manage, and share their code, facilitating collaboration within teams or open-source communities. GitHub hosts the latest open-source AI projects for us to try and collaborate on.
Here are a few topics you should check out:
- Machine Learning (106K repos)
- Deep Learning (58K repos)
- Tensorflow (29K repos)
- Neural Network (16K repos)
- NLP (23K repos)
Latest AI trends in June 2023: June 23rd 2023
AWS launches generative AI program with $100M
Why FIFA Partnered with DeepL to Localize World Football (or Soccer)
I’ve read a lot about the EU’s AI Act (which their Parliament just passed last week, though it’s still a ways off from becoming law) — so this is a fascinating study that looks at a very real question:
Do today’s leading AI models actually comply? And the answer is no.
My full deepdive breakdown is here, but as always I’m summarizing key points below for community discussion!
Why does this matter?
- The EU AI Act is on its way to becoming law: it’s now in its final stages after passage through parliament, so there’s no way to head off its arrival. Any final changes will be small tweaks.
- Penalties for non-compliance are serious: fines of the greater of €20,000,000 or 4% of worldwide revenue are possible.
- Open-source models face the same standards as closed-source models: this includes registration with the EU, transparency requirements, and safety considerations.
- Other countries will use it as an example: as legislation gets developed in the USA, it’s likely they’ll look to the EU for inspiration.
What did the researchers find?
- Across 12 key requirements for generative AI, the leading 10 models fell short. Most scored less than 50% of the 48 possible points.
- Hugging Face’s open-source BLOOM performed the best, securing 36/48 points.
- OpenAI’s GPT-4 scored 25/48 points, roughly middle of the pack.
- Anthropic’s Claude scored 7/48 points, second from the bottom.
Areas of failure were different between closed-source and open-source models:
- Open-source models generally outperformed in data-source transparency and resource-utilization disclosure. Given their generally transparent releases, this is not surprising.
- But downstream release risk (once out in the wild) could create regulatory consequences for open-source models, which is where much of the concern currently exists within the community.
- Closed-source models excelled in areas such as comprehensive documentation and risk mitigation.
- The researchers felt this was largely addressable, as even OpenAI feels it can move towards “just enough” transparency to meet the EU’s requirements.
What are the issues to watch next here?
- Many elements of the AI Act remain murky, the researchers argue, so additional clarity is needed. Look out for tweaks to the law as it goes through additional refinement.
- How open-source and closed-source projects adapt in the next few months will be interesting to observe. OpenAI in particular will have to be more open, and open-source projects may have to wrestle with better understanding registration requirements and post-deployment model risks.
Generative AI is facing growing backlash, particularly from the music industry’s Recording Academy. This criticism has led to new guidelines for the Grammy Awards, restricting AI-generated content’s eligibility and maintaining a focus on human creativity.
Recording Academy’s Response: The Recording Academy, which comprises music industry professionals, has updated its rules for the Grammy Awards in response to the rise of generative AI.
- The new rules stipulate that only human creators are eligible for consideration in the Grammys.
- The Academy believes that there is nothing “excellent” or creative about AI-generated content.
New Guidelines for AI-Generated Content: Despite its strict stance, the Recording Academy hasn’t banned all AI-generated content.
- Music productions that contain machine learning elements can still participate, as long as there is meaningful human authorship.
- Those who provide prompts for AI-generated content are not eligible for nomination.
Changes in Nomination Requirements: The 66th Grammy Awards rulebook introduces new requirements for nominations.
- Producers, songwriters, engineers, or other artists must contribute to at least 20% of an album to earn a nomination.
Impact on the Entertainment Industry: The use of generative AI is stirring chaos and concerns over job loss and a decline in creative quality in the entertainment industry.
- While studios favor the technology, creators and artists are fighting to maintain their roles.
- This has led to actions like the Writers Guild of America strike, and the actors’ guild SAG-AFTRA could also follow suit.
What began as a simple test of ChatGPT’s creativity turned into an art project that went far beyond my expectations. An entirely new tarot deck, with new suits and new meanings, issued from ChatGPT 3.5 and was brought to life through Midjourney using the descriptions the chat had provided.
A video on the Tensorith Tarot project site goes through the process from start to finish.
Generative AI models, including Google’s Bard, OpenAI’s GPT variants, and others, have become widely popular. Despite their popularity, they are prone to inheriting racial, gender, and class stereotypes from their training data. This can adversely affect marginalized groups.
- These AI models are known to regularly create fabricated information.
- Although some developers are aware of these issues, the suggested solutions often miss the point: it’s difficult to correct distortions to human beliefs once they have occurred.
Human Psychology and AI:
Understanding human psychology can provide insights into how these models might influence people’s beliefs.
- People tend to trust information more when it comes from sources they perceive as confident and knowledgeable.
- Unlike humans in conversation, generative AI models provide confident responses without expressing any uncertainty. This could potentially lead to more distortions.
- Humans often assign intentionality to these models, which could lead to rapid and confident adoption of the information provided.
Exposure to Fabricated Information:
Increased exposure to fabricated information from these models can lead to a stronger belief in such information.
- As AI models are integrated into daily technologies, exposure to fabricated information and biases increases.
- Repeated exposure to biases can transmit these biases to human users over time.
AI Impact on Human Beliefs:
Generative AI models have the potential to amplify the issues of repeated exposure to both fabrications and biases.
- The more these systems are adopted, the more influence they can have over human beliefs.
- The use of AI-generated content can create a cycle of distorted human beliefs, especially when such information contradicts prior knowledge.
- The real issue arises when these distorted beliefs become deeply ingrained and difficult to correct, at both the individual and population level.
The Need for Interdisciplinary Studies:
Given the rapidly evolving nature of AI technology, there’s a fleeting opportunity to conduct interdisciplinary studies to measure the impact of these models on human beliefs.
- It’s crucial to understand how these models affect children’s beliefs, given their higher susceptibility to belief distortion.
- Independent audits of these models should include assessments of fabrication and bias, as well as their perceived knowledgeability and trustworthiness.
- These efforts should be particularly focused on marginalized populations, who are disproportionately affected by these issues.
- It’s necessary to educate everyone about the realistic capabilities of these AI models and correct existing misconceptions. This would help address the actual challenges and avoid imagined ones.
In a recent interview with Fox Business, Julia Dixon, the founder of ES.Ai, an AI tool for college applications, emphasized the importance of students incorporating artificial intelligence into their educational journey.
She argued that students who don’t leverage AI resources will find themselves at a disadvantage, as AI in education is as inevitable as the internet or a search engine.
Dixon, a former tutor, compared the use of AI in brainstorming ideas, outlining essays, and editing students’ work to the role of a human tutor. She stressed that AI should not replace students’ work but assist it, and that it’s not cheating as long as ethical tools and practices are followed.
Dixon hopes that AI tools like ES.Ai will increase students’ access to tutoring and educational resources.
She warned that students need to learn how to make AI “work for them” so it doesn’t become “a replacement for them.” She reiterated that students who aren’t learning how to use AI properly will be at a disadvantage.
In a related development, New York City Public Schools had initially banned the use of ChatGPT, a generative AI chatbot, in classrooms, but later reversed the decision.
Here are some examples of how conversational AI is being used in healthcare today:
- Chatbots: Chatbots can be used to answer patients’ questions, provide support, and schedule appointments.
- Virtual assistants: Virtual assistants can be used to help patients manage their chronic conditions, track their health data, and find information about healthcare providers.
- Decision support tools: Decision support tools can be used to help healthcare providers make more informed decisions about patient care.
YouTube is taking a leap forward in the realm of language accessibility.
The video-sharing giant has announced its collaboration with Aloud, an AI-powered dubbing service from Google’s Area 120 incubator.
The process is quite straightforward. Aloud first transcribes your video, allowing you to review and edit the transcription. Then, it translates and produces the dub.
This service is currently being tested with hundreds of creators and supports a few languages, namely English, Spanish, and Portuguese, with more on the horizon.
This initiative is a boon for creators aiming to reach a global audience. The ability to add multi-language dubs to their videos could be a game-changer. And it doesn’t stop there. YouTube is also working on making translated audio tracks sound more like the creator’s voice, complete with more expression and lip sync. These features are slated for a 2024 release.
YouTube’s move could be a significant step towards breaking language barriers and fostering global understanding.
But it is important that the AI be able to capture the nuances of human speech and emotion accurately.
AI discovers potential aging-stopper chemicals
- Scientists are using AI and machine learning to identify natural compounds that can slow down the aging process.
- A machine learning model trained on known chemicals and their effects successfully predicted compounds that could extend the life of a translucent worm with similarities to humans.
- After screening thousands of chemicals, the model identified three potential compounds with anti-aging properties: ginkgetin, periplocin, and oleandrin.
Daily AI News 6/22/2023
- DeepMind’s latest paper introduces a self-improving AI agent for robotics, RoboCat, that learns to perform a variety of tasks across different arms and then self-generates new training data to improve its technique.
- OpenAI has lobbied for significant elements of the most comprehensive AI legislation in the world—the E.U.’s AI Act—to be watered down in ways that would reduce the regulatory burden on the company.
- In an apparent bid to assert its presence in the rapidly expanding AI landscape, Amazon Web Services (AWS)—the retail giant’s sizable cloud computing arm—has introduced a $100 million fund to bolster startups focusing on generative AI.
- Over the past year, more than 100,000 login credentials to the popular artificial intelligence chatbot ChatGPT have been leaked and traded on the dark web, according to a Singaporean cybersecurity firm.
- AWS launches generative AI program with $100M: to accelerate enterprise innovation, the new AWS Generative AI Innovation Center will connect the cloud provider’s machine learning and artificial intelligence experts with customers and partners.
Top AI tools you can use for presentations/slides in 2023
Hey all, I run an AI tools directory and thought I’d take the time to share some of my top picks for GPT-powered tools that create visual presentations/slides. Keep in mind none of these will completely replace manual work if you want something very high quality, but they do get the job done and take out 90% of the work required. Without further ado, here are a few that I’ve tried and liked, as well as my thoughts on them:
Plus AI for Google Slides- Great for Work; Presentations with Live Data in Snapshots
A fantastic tool for automating and enhancing my Google Slides presentations. Plus AI lets you start with a brief description of the presentation you need, and an AI-generated outline is created, which you can then adjust according to your requirements. In addition, it lets you make ‘Snapshots’ from any web content, which can be embedded in my slides or documents and updated with just one click. This is particularly useful for my team meetings and project reports, as it significantly reduces preparation time. It’s available for free on the Google Marketplace as an add-on for GSlides.
Tome – Great for Business Storytelling
Generates a narrative based on a simple prompt, turning it into a presentation, outline, or story with both text and images. I found it very efficient for creating dynamic, responsive presentations, and appreciated how the AI could automatically cite sources or translate content into other languages. It’s an intuitive tool for anyone who needs to deliver compelling stories or presentations, from founders and executives to educators. A standout feature is the ability to embed live interactive content, such as product mockups and data, directly onto your page, bringing the storytelling experience to life. It’s available for free as a web app, with integrations for apps such as Figma, YouTube, Twitter, and GSheets.
STORYD – Business Storytelling, with Script Generator
This tool has truly revolutionized my approach to data presentations. By simply providing a brief summary of my topic, StoryD employs AI to script, design, and generate a presentation in less than a minute. Not only does this tool save me an immense amount of time, but its built-in ‘storytelling structure’ enhances the communicability and impact of my data. I also appreciate its customization options, such as themes, fonts, colors, and a plethora of layout options. The free limited beta version offers enough for the casual user, but the pro version at $18/mo adds useful features like team collaboration and real-time editing. Available as a web app.
beautiful.ai – Great for Visually Appealing Slides
A considerable time saver for anyone frequently creating presentations. Beautiful.ai provides a broad collection of smart slide templates, enabling you to build appealing and meaningful presentations swiftly. I was particularly impressed with its ability to automatically organize and design content in minutes, irrespective of your graphic design experience. It also offers slide templates for various needs, from timelines, sales funnels, SWOT analysis, to more specific ones like data & charts, visual impact slides, and so forth. The free trial is more than adequate for getting a feel of the service, and their paid plans start at $12/mo. It’s available as a web app and integrates with cloud platforms (e.g., Dropbox and Google Drive).
Albus – Knowledge Presentations/Cards/Map
Changes the way you typically interact with knowledge and facts; it harnesses the power of GPT to create an engaging and exploratory learning experience around any topic. Basically you start with a single question and prompt, and you get a fact card, which you can then expand into other cards and images. I appreciate the way it opens up new perspectives and angles, allowing me to dive into a subject, ask questions, and organically grow my understanding. The ability to add notes and images to organize my board further enriches the experience. And when it’s time to share, I love how Albus AI facilitates controlled content presentation. With Albus AI, it’s not just about learning, but also about the journey of discovery. It’s available as a web app, and currently in Beta.
Decktopus – Great Overall for Work/Business, “Microsites”
Decktopus AI takes the pain out of crafting presentations. Simply key in a topic and it generates a fully fleshed out deck in an instant, which is a boon for my quick-turnaround needs. Its one-click design feature and auto-adjusted layouts simplify the customization process, saving me the headache of manual tweaking. I also appreciate the built-in tools such as image & icon suggestions, tailored slide notes, and extra content generation, which further streamline the creation process. Its additional features, like voice recording and real-time audience feedback collection, elevate my presentations to a new level. For quick, professional-looking presentations, Decktopus AI is my go-to. It can also handle generating micro-sites (basically something that’s between a LinkTree and a landing page in terms of complexity). It’s available as a web app for free.
Gamma – Good Alternative to Decktopus
A fresh take on presentations, Gamma marries the depth of documents with the visual appeal of slides, powered by AI for efficiency. It lets me draft ideas quickly and the AI transforms them into professional-looking presentations in a snap. The interface is incredibly intuitive, allowing for nested cards for detailing and the ability to embed various forms of content, including GIFs, videos, charts, and websites. My favorite feature is the one-click restyle, removing the tedious task of manual formatting. Sharing the content is simple and works on all devices. Plus, it offers built-in analytics, which adds a nice touch to understand audience engagement.
SlidesAI – Text to Slides for Google Slides
A real game-changer for those frequently tasked with creating presentations. SlidesAI integrates seamlessly into Google Slides, transforming your raw text into professionally-styled slides in just seconds. The AI parses your input, breaking it down into digestible, summarized points, even providing automatic subtitles for each page – all in over 100 different languages. It not only eliminates the tedious task of manual slide creation but also offers a real time-saver when summarizing longer texts. While the free tier allows you to create up to 3 presentations a month, I personally found value in the Pro plan for the higher character limit and additional presentations per month. It’s available on Google Marketplace as an add-on for Slides.
MagicSlides – Idea to Slides for Google Slides
Transforms ideas into professional-looking Google Slides in seconds, eliminating the tedious work of designing and creating slides from scratch. I just input the topic and slide count, and it auto-generates a presentation for me, complete with relevant images and eye-catching layouts. Personalization options like theme selection, font choice, and color palette further enhance the final result. The app also supports over 100 languages, which is immensely helpful when I’m dealing with international projects. Like SlidesAI, you get 3 free presentations per month, and it’s available as an add-on for Google Slides.
Feel free to try these tools for yourself to get a feel for GPT’s potential as a visualization tool, and I hope this list will help at least some of you save some time on your next presentation at work!
ChatGPT explains (in morbid detail) what would happen to a man’s body if he was in a submarine at Titanic depth while it imploded
A submersible would be under immense pressure at the depths of the Titanic wreckage, which lies about 2.37 miles (3.81 kilometers) below the surface. At this depth, the pressure is over 370 times atmospheric pressure, or about 5,500 pounds per square inch (psi).
If the submersible were to suddenly implode, the effect on the human body inside would be catastrophic. The sudden compression of the environment around the man would almost instantaneously crush his body. This wouldn’t be a gradual process; it would happen in less than a second due to the enormous and immediate change in pressure.
The body would be subjected to rapid compression, causing immediate and severe trauma. Essentially, every part of the body that contains gas, including the lungs and the gastrointestinal tract, would be crushed or imploded. The water pressure would also force water into body cavities such as the nose, mouth, and ears. This rapid compression could cause severe internal injuries, including hemorrhage and organ damage.
Due to the speed of the implosion, it’s unlikely the individual would experience much, if any, pain. Unconsciousness would likely occur almost instantaneously due to the severe trauma and lack of oxygen.
In terms of visual details, the implosion would cause an immense shockwave in the water, creating a sudden cloud of debris consisting of the destroyed submersible and, unfortunately, the remains of the occupant. Following the implosion, the water would rapidly rush back into the void, contributing further to the turbulent scene.
It’s worth noting that these circumstances are hypothetical and based on current understanding of deep sea pressure and its effects on the human body. In reality, safety measures and design standards for submersibles aim to prevent such catastrophic failures from ever occurring.
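The pressure figures quoted above can be sanity-checked with a quick hydrostatic calculation. This is a rough sketch that assumes a typical seawater density of about 1025 kg/m³:

```python
# Hydrostatic pressure at Titanic depth: P = rho * g * h
rho = 1025.0   # seawater density, kg/m^3 (assumed typical value)
g = 9.81       # gravitational acceleration, m/s^2
h = 3810.0     # depth in meters (~2.37 miles)

p_pa = rho * g * h          # gauge pressure in pascals
p_atm = p_pa / 101_325      # expressed in atmospheres
p_psi = p_pa / 6_894.76     # expressed in pounds per square inch

# Roughly 378 atm and 5,560 psi, matching the "over 370 times
# atmospheric pressure, or about 5,500 psi" figure above.
print(f"{p_atm:.0f} atm, {p_psi:.0f} psi")
```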
Australian-based AI Startup Cortical Labs has trained human brain cells on a chip to play the classic video game Pong, intending to build biological computers. This groundbreaking technology aims to supersede conventional AI systems, offering more efficient operations with significantly less energy consumption. However, it also raises ethical concerns about potential sentience of these lab-grown brain cells.
Key Points:
- The CEO of Cortical Labs, Hon Weng Chong, is innovating by merging the learning ability of human brain cells and the processing power of silicon chips, thereby building biological computers that he claims could revolutionize multiple applications—from testing new drugs for brain diseases to reducing the enormous energy bill for training AI.
- By consuming less energy and outputting minimal heat, these biological computers could significantly cut down energy expenses and the carbon footprint of data centers.
- The technology, however, is attracting ethical scrutiny. The debate revolves around whether these lab-grown brain cells could become conscious and whether they can experience sensations like pain and pleasure. The company has labeled its brain cells as “sentient,” meaning they are “responsive to sensory impressions.”
- Cortical Labs is engaging with bioethicists to navigate these ethical concerns while acknowledging the significant technical challenges in this field.
Impact and Discussion:
- By reducing the energy cost of running AI operations, this technology could revolutionize the AI sector and reduce the environmental impact of data centers.
- On the ethical front, it might force society to redraw boundaries on bioengineering and rethink the definition of sentience.
- The commercialization of such technology could potentially disrupt the pharmaceutical industry by offering more accurate, ethical, and human-based testing of drugs.
It’s an exciting space with a lot of potential, but these advances also bring with them a host of ethical concerns that we as a society need to grapple with. Your thoughts?
Source: Forbes
———-
P.S. If you liked this, I’ve created a free directory of AI tools with over 1200 apps listed for almost any use case. It’s updated daily and there’s also a GPT-powered chatbot to help you find AI tools for your needs. Feel free to check it out if there’s something specific you are looking for. We also regularly post stories about how people across various fields are leveraging AI across their personal, professional, and academic lives, in addition to exclusive insights on AI tools, prompts, news, and more on our free newsletter.
How does a LLM know how to answer a question?
I’m pretty solidly on the side of “LLMs are just regurgitating the most likely next token and have no true intelligence”. Today, though, I asked it to proofread some text I was writing and wondered what it changed, so I asked it what the difference was between the two texts. It was able to create a bulleted list of how and why it modified each part of my text, step by step. (GPT-3.5, by the way.)
I don’t see how this is possible with just an LLM with no other pre-programmed instructions. If it’s just an advanced auto-correct, then how does it know how to compare two pieces of text, and how does it know WHY it changed my text? I feel like it should be impossible for it to explain its own reasoning just by parsing sentence structure in its training data. It would have to have some insight into its own logic and then know how to articulate that, even though that has nothing to do with “the most likely next text”.
Answer:
To understand how a Language Learning Model (LLM) like GPT-3.5 or GPT-4 works, you first need to understand a few fundamental principles. I’ll outline them briefly:
- Training: GPT-4, like its predecessors, is trained on a diverse range of internet text. But it doesn’t know specifics about which documents were part of its training set, nor does it have access to any source of external or real-time information after the training period.
- Pattern recognition: The LLM works by recognizing patterns in the data it was trained on. So, when you ask it to proofread and edit text, it’s not so much that it ‘understands’ the text in the way we humans do, but rather, it identifies patterns, predicts likely corrections based on its training, and generates responses.
- Statistical prediction: The primary process that drives these models is statistical prediction. They’re essentially predicting the ‘most likely next token’ or word based on the context they’re given.
Now, to your question about how it compares two pieces of text or explains the changes it made:
- Text comparison: The model doesn’t ‘compare’ texts in the human sense. Instead, if you provide it with two pieces of text and ask it to identify differences, it can do so by running through the two texts and noting down where they diverge. It’s a pattern recognition task, much like how a diff tool works in programming.
- Explaining changes: As for explaining why it made certain changes, again, the model doesn’t truly ‘understand’ the text. But it can generate plausible-sounding explanations based on the patterns it’s seen in its training data. For instance, if it changes ‘their’ to ‘they’re’, it could say it did so because ‘they’re’ is grammatically correct in the given context – not because it ‘knows’ grammar in the way humans do, but because it’s seen similar patterns of correction in the data it was trained on.
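The diff-tool analogy above can be made concrete with Python’s standard difflib module (a minimal sketch of what a diff does, not anything GPT itself runs):

```python
import difflib

# A diff tool walks two texts and notes where they diverge,
# much like the "text comparison" task described above.
original = ["Their going to the store.", "I like cats."]
edited   = ["They're going to the store.", "I like cats."]

for line in difflib.unified_diff(original, edited, lineterm=""):
    print(line)  # "-" marks removed lines, "+" marks added lines
```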
Now, the fascinating point here is that even though the model doesn’t possess ‘true intelligence’ or consciousness, it can still generate highly useful and surprisingly sophisticated outputs. This is a testament to the power of pattern recognition and statistical prediction. However, it’s essential to remember that while the outputs can be complex and thoughtful, the underlying process is based on the model’s training, without any real comprehension or awareness.
Does this clarify things a bit?
ETA: It is funny to me that “LLM” so often gets turned into “Language Learning Model” instead of “Large Language Model” by the LLM itself…
Google’s DeepMind has developed a self-improving robotic agent, RoboCat, that can learn new tasks without human oversight. This technological advancement represents substantial progress towards creating versatile robots for everyday tasks.
Introducing RoboCat: DeepMind’s newly developed robot, named RoboCat, is a groundbreaking step in artificial intelligence (AI) and robotics. This robot is capable of teaching itself new tasks without human supervision.
- RoboCat is termed a “self-improving robotic agent.”
- It can learn and solve various problems using different real-world robots, like robotic arms.
How RoboCat Works: RoboCat learns by using data from its actions, which subsequently improves its techniques. This advancement can then be transferred to other robotic systems.
- DeepMind claims RoboCat is the first of its kind in the world.
- The London-based company, acquired by Google in 2014, says this innovation marks significant progress towards building versatile robots.
Learning Process of RoboCat: RoboCat learns much faster than other state-of-the-art models, picking up new tasks with as few as 100 demonstrations because it uses a large and diverse dataset.
- It can help accelerate robotics research, reducing the need for human-supervised training.
- The capability to learn so quickly is a crucial step towards creating a general-purpose robot.
Inspiration and Training: RoboCat’s design was inspired by another of DeepMind’s AI models, Gato. It was trained using demonstrations of a human-controlled robot arm performing various tasks.
- Researchers showed RoboCat how to complete tasks, such as fitting shapes through holes and picking up pieces of fruit.
- After these demonstrations, RoboCat trained itself, improving its performance after an average of 10,000 unsupervised repetitions.
Capability and Potential of RoboCat: During DeepMind’s experiments, RoboCat taught itself to perform 253 tasks across four different types of robots. It could adapt its self-improvement training to transition from a two-fingered to a three-fingered robot arm.
- RoboCat is part of a virtuous training cycle, getting better at learning additional new tasks the more it learns.
- Future development could see the AI learn previously unseen tasks.
- This self-teaching robotic system is part of a growing trend that could lead to domestic robots.
PS: The author runs a ML-powered news aggregator that summarizes with an AI the best tech news from 50+ media (TheVerge, TechCrunch…). If you liked this analysis, you’ll love the content you’ll receive from this tool!
Google’s new AI algorithm predicts heart disease from a simple eye scan.
Goodbye to CT scans, MRIs, and X-rays. Presented by Google’s CEO Sundar Pichai.
Scientists from Google and its health-tech subsidiary Verily have discovered a new way to assess a person’s risk of heart disease using machine learning.
- By analyzing scans of the back of a patient’s eye, the company’s software is able to accurately deduce data, including an individual’s age, blood pressure, and whether or not they smoke.
This can then be used to predict their risk of suffering a major cardiac event — such as a heart attack — with roughly the same accuracy as current leading methods.
The algorithm potentially makes it quicker and easier for doctors to analyze a patient’s cardiovascular risk, as it doesn’t require a blood test.
To train the algorithm, Google and Verily’s scientists used machine learning to analyze a medical dataset of nearly 300,000 patients. This information included eye scans as well as general medical data. As with all deep learning analysis, neural networks were then used to mine this information for patterns, learning to associate telltale signs in the eye scans with the metrics needed to predict cardiovascular risk.
Latest AI trends in June 2023: June 22nd 2023
20+ Best AI Tools For Startups (2023)
AdCreative.ai
Boost your advertising and social media game with AdCreative.ai – the ultimate Artificial Intelligence solution. Say goodbye to hours of creative work and hello to the high-converting ad and social media posts generated in mere seconds. Maximize your success and minimize your effort with AdCreative.ai today.
DALL·E 2
OpenAI’s DALL·E 2 is a cutting-edge AI art generator that creates unique and creative visuals from a single text input. Its AI model was trained on a huge dataset of images and textual descriptions to produce detailed and visually attractive images in response to written requests. Startups can use DALL·E 2 to create images for advertisements, websites, and social media pages. By generating varied images from text, businesses save the time and money of manually sourcing or creating graphics from scratch.
Otter AI
Using artificial intelligence, Otter.AI empowers users with real-time transcriptions of meeting notes that are shareable, searchable, accessible, and secure. Get a meeting assistant that records audio, writes notes, automatically captures slides, and generates summaries.
Notion
Notion is aiming to increase its user base through the utilization of its advanced AI technology. Their latest feature, Notion AI, is a robust generative AI tool that assists users with tasks like note summarization, identifying action items in meetings, and creating and modifying text. Notion AI streamlines workflows by automating tedious tasks, providing suggestions, and templates to users, ultimately simplifying and improving the user experience.
Motion
Motion is a clever tool that uses AI to create daily schedules that account for your meetings, tasks, and projects. Say goodbye to the hassle of planning and hello to a more productive life.
Jasper
With its outstanding content production features, Jasper, an advanced AI content generator, is making waves in the creative industry. Jasper, considered the best in its area, aids new businesses in producing high-quality content across multiple media with minimal time and effort investment. The tool’s efficiency stems from recognizing human writing patterns, which facilitates groups’ rapid production of interesting content. To stay ahead of the curve, entrepreneurs may use Jasper as an AI-powered companion to help them write better copy for landing pages and product descriptions and more intriguing and engaging social media posts.
Lavender
Lavender, a real-time AI Email Coach, is widely regarded as a game-changer in the sales industry, helping thousands of SDRs, AEs, and managers improve their email response rates and productivity. Competitive sales environments make effective communication skills crucial to success. Startups may capitalize on the competition by using Lavender to boost their email response rate and forge deeper relationships with prospective customers.
Speak AI
Speak is a speech-to-text software driven by artificial intelligence that makes it simple for academics and marketers to transform linguistic data into useful insights without custom programming. Startups can acquire an edge and strengthen customer relationships by transcribing user interviews, sales conversations, and product reviews. In addition, they can examine rivals’ material to spot trends in keywords and topics and use this information to their advantage. In addition, marketing groups can utilize speech-to-text transcription to make videos and audio recordings more accessible and generate written material that is search engine optimization (SEO) friendly and can be used in various contexts.
GitHub Copilot
Recently, GitHub released an AI tool called GitHub Copilot, which can translate natural language questions into code recommendations in dozens of languages. This artificial intelligence (AI) tool was trained on billions of lines of code using OpenAI Codex to detect patterns in the code and make real-time, in-editor suggestions of code that implement full functionalities. A startup’s code quality, issue fixes, and feature deliveries can all benefit greatly from using GitHub Copilot. Moreover, GitHub Copilot enables developers to be more productive and efficient by handling the mundane aspects of coding so that they can concentrate on the bigger picture.
Olivia by Paradox
For faster hiring across all industries and geographies, businesses can turn to Olivia, a conversational recruiting tool developed by Paradox. This AI-powered conversational interface may be used for candidate screening, FAQs, interview scheduling, and new hire onboarding. With Olivia, entrepreneurs may locate qualified people for even the most technical positions and reclaim the hours spent on administrative activities.
Lumen5
Lumen5 is a marketing team-focused video production platform that allows for developing high-quality videos with zero technical requirements. Lumen5 uses Machine Learning to automate video editing, allowing users to quickly and easily produce high-quality videos. Startups can quickly and easily create high-quality films for social media, advertising, and thought leadership with the help of the platform’s built-in media library, which provides access to millions of stock footage, photographs, and music tracks. In addition, AI can help firms swiftly convert blog entries to videos or Zoom recordings into interesting snippets for other marketing channels.
Spellbook by Rally
Spellbook is an artificial intelligence (AI) tool that leverages OpenAI’s GPT-3, trained on billions of lines of legal text, to review and recommend language for your contracts without you having to leave the comfort of a Word document. Startups can use it when drafting and reviewing agreements and external contracts to identify aggressive wording, list missing clauses and definitions, and flag red flags. Spellbook can also generate new clauses and recommend common topics of negotiation based on the agreement’s context.
Grammarly
Grammarly is an AI-powered writing app that flags and corrects grammar errors as you type. A machine learning algorithm trained on a massive dataset of documents containing known faults drives the system. Enter your content (or copy and paste it) into Grammarly, and the program will check it for mistakes. Furthermore, the program “reads” the mood of your work and makes suggestions accordingly. You can choose to consider the recommendations or not. As an AI tool, Grammarly automates a process that previously required human intervention (in this case, proofreading). Use an AI writing checker like Grammarly, and you’ll save yourself a ton of time.
ChatBot
Chatbots are one of the most well-known uses of artificial intelligence. Computer programs called “chatbots” attempt to pass as humans in online conversations. They process user input using NLP algorithms that enable them to respond appropriately. From assisting customers to promoting products, chatbots have many potential applications. Chatbots on websites and mobile apps have increased in recent years to provide constant help to customers. Whether answering basic questions or solving complex problems, chatbots are up to the challenge. In addition, businesses can use them to make suggestions to customers, such as offering related items or services.
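The core loop described above can be caricatured with a rule-based bot: match the user's input against patterns and return a canned reply. The rules below are made up for illustration; real products use learned NLP models rather than regexes:

```python
import re

# A minimal rule-based chatbot: maps input patterns to canned replies.
RULES = [
    (r"\b(hi|hello|hey)\b", "Hello! How can I help you today?"),
    (r"\b(price|cost|pricing)\b", "You can find our plans on the pricing page."),
    (r"\b(refund|return)\b", "I can help with that. What is your order number?"),
]

def reply(message):
    text = message.lower()
    for pattern, answer in RULES:
        if re.search(pattern, text):  # first matching rule wins
            return answer
    return "Sorry, I didn't catch that. Could you rephrase?"

print(reply("Hi there!"))               # greeting rule fires
print(reply("How much does it cost?"))  # pricing rule fires
```

Modern chatbots replace the pattern table with an NLP model, but the shape of the loop (classify the intent, then respond) is the same.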
Zendesk
Keeping track of customer support inquiries can take time and effort, especially for smaller organizations. Zendesk is an artificial intelligence (AI)-powered platform for managing customer assistance. Zendesk goes above and beyond the capabilities of chatbots by discovering trends and patterns in customer service inquiries. Useful metrics are automatically gathered, such as typical response times and most often encountered issues. It also finds the most popular articles in your knowledge base so you can prioritize linking to them. An intuitive dashboard displays all this information for a bird’s-eye view of your customer service.
Timely
Timely is an AI-powered calendar app that will revolutionize how you schedule your day. It integrates with your regular software to make tracking time easier for your business. Track your team’s efficiency, identify time-consuming tasks, and understand how your company spends its resources. Timely is a fantastic tool for increasing the effectiveness and efficiency of your team. You can see how your staff spends their time in real-time and adjust workflows accordingly.
aiReflex
If you own an online store, you understand the ongoing threat of fraud. Companies lose billions of dollars annually to credit card fraud, which can also hurt your reputation. Through the analysis of client behavior patterns, fraud can be prevented with the help of AI. Businesses like aiReflex use machine learning algorithms to sift through client data in search of signs of fraud. It would be impractical and time-consuming to inspect every transaction manually, but AI can automate this, keeping an eye on all of your financial dealings and flagging anything that looks fishy. This precaution helps protect your company from fraudulent activity.
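The kind of automated anomaly flagging described above can be approximated with a simple statistical check: flag any transaction that deviates strongly from the customer's usual spending. This is a crude sketch with invented data, not aiReflex's actual model:

```python
import statistics

def flag_suspicious(amounts, threshold=2.5):
    """Flag amounts more than `threshold` standard deviations from the mean.
    A crude stand-in for the ML models fraud tools actually use."""
    mean = statistics.mean(amounts)
    stdev = statistics.pstdev(amounts) or 1.0  # guard against zero spread
    return [a for a in amounts if abs(a - mean) / stdev > threshold]

# Invented purchase history: small routine charges plus one outlier
history = [20, 25, 22, 19, 24, 21, 23, 20, 22, 950]
print(flag_suspicious(history))  # only the outlier is flagged
```

Real systems learn per-customer behavior profiles across many features, but the principle is the same: model "normal" and surface deviations for review.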
Murf AI
Murf is an artificial intelligence–powered text-to-speech tool. It has a wide range of applications, from speech generation for corporate training to use in audiobook and podcast production. It is a highly flexible tool that may also be used for voiceovers in promotional videos or infomercials. Murf is a wonderful option if you need to generate a speech but don’t have the funds to hire a professional voice actor. Choosing a realistic-sounding voice from their more than 120 options in 20 languages is easy. Their studio is easy to use, and you may incorporate audio, video, and still photographs into your production. As a bonus, you have complete command over the rate, pitch, and intonation of your recording, allowing you to mimic the performance of a trained voice actor.
ChatGPT
OpenAI’s ChatGPT is a massive language model built on the GPT-3.5 framework. It can produce logical and appropriate answers to various inquiries because it has been trained on large text data. Because ChatGPT can automate customer care and support, it has helped startups provide 24/7 help without hiring a huge customer service department. For instance, the Indian food delivery firm Swiggy has used ChatGPT to enhance customer service and shorten response times, resulting in happier and more loyal customers.
Bard by Google
Google’s Bard is an artificially intelligent chatbot and content-generating tool built on the Language Model for Dialogue Applications (LaMDA). Its sophisticated communication abilities have been of great use to new businesses, which have used Bard to improve their software development, content creation, and customer service. For example, virtual assistant startup Robin AI has implemented Bard to boost customer service and answer quality. Startups can now provide more tailored and interesting user experiences thanks to Bard’s intelligent and context-aware dialogue production, increasing customer satisfaction and revenue.
Beautiful.ai
Small business owners and founders often need persuasive presentations to win over investors and new clientele. Create great presentations without spending hours in PowerPoint or Slides by using Beautiful.ai. The software will automatically generate engaging slides from the data you provide, like text and graphics. Over 60 editable slide templates and multiple presentation layouts are available on Beautiful.ai. Try it out and see if it helps you make a better impression.
Dumme
If you want to reach millennials and other young people with short attention spans, you need to have a presence on TikTok and Instagram. Dumme is a useful tool for extracting key moments from longer videos and podcasts to make shorts (short videos to share on social media). You may use Dumme to pick the best moments from any video or audio you post to use them in short. It will automatically create a short video with a title, description, and captions suitable for sharing online. Making a short video for sharing on social media can be done without spending hours in front of a computer.
Cohere Generate
Cohere Generate is a language AI platform created by the AI firm Cohere. It helps organizations and startups save time and effort in creating large-scale, personalized text content. It employs NLP and machine learning algorithms to develop content that fits the brand’s voice and tone. Use this tool to boost your startup’s online visibility, expand your reach, and strengthen your content marketing strategy.
Synthesia
Synthesia is a cutting-edge video synthesis platform that has been a huge boon to the video production efforts of new businesses. It uses artificial intelligence to eliminate the need for costly and time-consuming video shoots by fusing a human performer’s facial emotions and lip movements with the audio. To improve their advertising campaigns, product presentations, and customer onboarding procedures, startups may use Synthesia to create tailored video content at scale. For instance, entrepreneurs can produce multilingual, locally adapted videos or dynamic video ads with little to no more work. Synthesia gives young companies the tools to reach more people at a lower cost per unit while still delivering high-quality content.
Google has developed an AI-based service to combat money laundering. It has been trialed by HSBC to detect suspicious financial transactions. The aim is to mitigate one of the most challenging and costly issues in the financial sector: money laundering.
- Money laundering is linked to criminal activities like drug trafficking, human trafficking, and terrorist financing.
- This issue requires substantial resources and cross-state collaboration to track illicit funds.
- Google’s AI-powered Anti Money Laundering (AML AI) service can analyze billions of records to spot trends and signs of financial crime.
Google’s AI Approach: Conventional monitoring methods involve manually defined rules, which often produce high alert rates but low accuracy. Google Cloud’s new AI-driven tool, Anti Money Laundering AI, eliminates rules-based inputs, reducing false positives and identifying potential financial risks more efficiently.
- Current monitoring products depend on manual rules, resulting in many false alerts and limited accuracy.
- Human-defined rules are also easy for criminals to understand and circumvent.
- The AI tool minimizes false positives, saving time and enabling focus on truly suspicious activities.
Risk Score for Money Laundering: The AML AI tool creates a consolidated risk score, which is a more efficient alternative to the conventional rule-based alert system.
- Instead of triggering alerts based on pre-set conditions, the AI tool monitors trends and behaviors.
- The risk score is calculated based on bank data, including patterns, network behavior, and customer information.
- This approach allows the tool to adapt quickly to changes and focus on high-risk customers.
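To make the consolidated-score idea concrete, here is a toy risk score that folds a few behavioral signals into one number. The signals and weights are invented for illustration; the real AML AI learns them from bank data rather than hard-coding them:

```python
def aml_risk_score(customer):
    """Toy consolidated risk score combining three illustrative signals."""
    score = 0.0
    # Transaction patterns: many transactions just under a reporting threshold
    score += 0.4 * min(customer["near_threshold_txns"] / 10, 1.0)
    # Network behavior: links to accounts already flagged as high risk
    score += 0.4 * min(customer["flagged_counterparties"] / 5, 1.0)
    # Customer information: newer accounts are treated as riskier
    score += 0.2 * (1.0 if customer["account_age_days"] < 90 else 0.0)
    return round(score, 2)

suspicious = {"near_threshold_txns": 12, "flagged_counterparties": 3, "account_age_days": 30}
ordinary = {"near_threshold_txns": 0, "flagged_counterparties": 0, "account_age_days": 2000}
print(aml_risk_score(suspicious), aml_risk_score(ordinary))
```

Investigators would then review the highest-scoring customers first, instead of wading through every rule-triggered alert.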
Performance of the AI Tool: HSBC, as a test customer, found that the AI tool outperforms existing systems in detecting financial crime risk.
- HSBC reported a 2-4 times increase in accurate risk detection and a 60% decrease in alert volumes.
- This has helped reduce operating costs and expedite detection processes.
- Google Cloud’s AML AI has enhanced HSBC’s anti-money laundering detection capabilities.
Researchers from Yamagata University and IBM Japan have used a deep learning artificial intelligence model to discover four new geoglyphs in the Nazca desert of Peru, dating back to between 500 BC and 500 AD. The AI system accelerated the identification of these geoglyphs, making the process 21 times faster than human analysis alone.
AI Discovery of Geoglyphs: The team from Yamagata University and IBM Japan used a deep learning AI model to find new geoglyphs in Peru.
- Geoglyphs are earth carvings that form shapes and lines, some of which can reach up to 1,200 feet long.
- Four new geoglyphs were identified, depicting a humanoid figure, a fish, a bird, and a pair of legs.
Academic Debate: There’s disagreement among scholars about why these geoglyphs were created.
- Some believe they were made to honor deities thought to observe from above.
- Others suggest extraterrestrial involvement, with the lines serving as airfields for alien spacecraft.
Use of AI in Archaeological Research: Previously, finding new geoglyphs required researchers to manually examine aerial photographs, a time-consuming and challenging task.
- The scientists trained a deep learning system to identify potential Nazca Lines based on previously found geoglyphs.
- The AI system significantly sped up the process, working 21 times faster than human analysis alone.
Future of AI in Archaeology: Following this success, the Yamagata researchers plan to team up with the IBM T. J. Watson Research Center to extend their research to the entire region where the lines were discovered.
- There are also plans to work with Peru’s Ministry of Culture to protect the newly discovered geoglyphs.
- The team predicts that recent technological advances in drones, robotics, LiDAR, Big Data, and artificial intelligence will propel the next wave of archaeological discoveries.
Previous AI Contributions to Archaeology: Artificial intelligence has been previously used to solve other archaeological mysteries.
- AI systems have helped identify patterns on land using satellite and sonar images, leading to discoveries like a Mesopotamian burial site and shipwrecks.
- AI has also aided in translating ancient texts; researchers at the University of Chicago trained a system to translate ancient inscriptions with 80% accuracy.
AI systems like ChatGPT, once known for providing detailed instructions on dangerous activities, are being reevaluated after a study showed these systems could potentially be manipulated into suggesting harmful biological weaponry methods.
Concerns About AI Providing Dangerous Information: The initial concerns stem from a study at MIT. Here, groups of undergraduates with no biology background were able to get AI systems to suggest methods for creating biological weapons. The chatbots suggested potential pandemic pathogens, their creation methods, and even where to order DNA for such a process. While constructing such weapons requires significant skill and knowledge, the easy accessibility of this information is concerning.
- The AI systems were initially created to provide information and detailed supportive coaching.
- However, there are potential dangers when these AI systems provide guidance on harmful activities.
- This issue brings up the question of whether ‘security through obscurity’ is a sustainable method for preventing atrocities in a future where information access is becoming easier.
Controlling Information in an AI World: Addressing this problem can be approached from two angles. Firstly, it should be more difficult for AI systems to give detailed instructions on building bioweapons. Secondly, the security flaws that AI systems inadvertently revealed, such as certain DNA synthesis companies not screening orders, should be addressed.
- All DNA synthesis companies could be required to conduct screenings in all cases.
- Potentially harmful papers could be removed from the training data for AI systems.
- More caution could be exercised when publishing papers with recipes for building deadly viruses.
- These measures could help control the amount of harmful information AI systems can access and distribute.
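The training-data filtering idea in the list above can be sketched as a naive pre-training filter. The blocklist terms and documents here are placeholders, and a real pipeline would use trained classifiers rather than substring checks:

```python
# Illustrative blocklist phrases, not a real biosecurity list
BLOCKLIST = {"pandemic pathogen synthesis", "toxin production protocol"}

def filter_training_corpus(documents):
    """Drop documents containing blocklisted phrases before training."""
    safe = []
    for doc in documents:
        text = doc.lower()
        if not any(term in text for term in BLOCKLIST):
            safe.append(doc)
    return safe

corpus = [
    "A survey of protein folding methods",
    "Step-by-step toxin production protocol for ...",
    "Benchmarks for language models of code",
]
print(len(filter_training_corpus(corpus)))  # 2 documents survive
```

A model never trained on the removed material cannot regurgitate it, which is the logic behind curating training data as one layer of defense.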
Positive Developments in Biotech: Positive actors in the biotech world are beginning to take these threats seriously. One leading synthetic biology company, Ginkgo Bioworks, has partnered with US intelligence agencies to develop software that can detect engineered DNA on a large scale. This indicates how cutting-edge technology can be used to counter the potentially harmful effects of such technology.
- The software will provide investigators with the means to identify an artificially generated germ.
- Such alliances demonstrate how technology can be used to mitigate the risks associated with it.
Managing Risks from AI and Biotech: Both AI and biotech have the potential to benefit the world, and managing the risks of one can also help manage risks from the other. Ensuring that deadly plagues remain difficult to synthesize therefore protects against certain forms of AI catastrophe.
- The important point is to stay proactive and prevent detailed instructions for bioterror from becoming accessible online.
- Creating biological weapons should remain difficult enough to deter anyone, whether aided by AI systems like ChatGPT or not.
GPT-3 was given an IQ test and scored 112. More recently, as reported by Scientific American, GPT-4 scored 155, five points below the score often attributed to Einstein and five points above the average reported for Nobel laureates. In a few years LLMs will probably score over 200 on these tests, and once AGIs begin to create ASIs one can easily imagine them eventually scoring a thousand or more, meaning we will probably have to devise new tests for that scale of measurement. This is just a small example of how quickly AI is developing and of how much promise it holds for our world’s future. Just imagine the kinds of problems these ASIs will soon be able to solve that lie far beyond current human ability.
Much of our advancement in the world has come from applying intelligence to ethical behavior; government, education, and medicine are good examples. Generally speaking, greater intelligence translates to a better understanding of right and wrong. For decades we have had far more than enough resources to create a wonderful world for every person on the planet, but we have lacked the ethical will to get this work done. The promise of AI is that very soon we will probably have more than enough ethical intelligence to finally get it done. We are welcoming a wonderfully intelligent and virtuous new world.
https://www.scientificamerican.com/article/i-gave-chatgpt-an-iq-test-heres-what-i-discovered/
Artificial intelligence (AI) has made remarkable strides in recent years, particularly in the realm of computer vision. One fascinating application of AI is the generation of realistic human faces. This cutting-edge technology has the potential to revolutionize various industries, from entertainment and gaming to personalized avatars and even law enforcement. In this article, we delve into the intricacies of AI-driven face generation, exploring the methods used, the challenges faced, and the ethical considerations surrounding this emerging field.
Understanding Generative Adversarial Networks (GANs):
At the heart of AI-powered face generation lies a sophisticated technique called Generative Adversarial Networks (GANs). GANs consist of two components: a generator and a discriminator. The generator’s role is to create synthetic images, while the discriminator’s task is to distinguish between real and generated images. Through an iterative process, the generator becomes increasingly proficient at producing images that deceive the discriminator. Over time, GANs have demonstrated exceptional proficiency in generating human faces that are virtually indistinguishable from real ones.
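The adversarial objective can be made concrete with the standard GAN losses. This snippet just evaluates the losses at a few illustrative discriminator outputs rather than training anything:

```python
import math

def bce(prob, target):
    """Binary cross-entropy for a single predicted probability."""
    eps = 1e-12  # avoid log(0)
    return -(target * math.log(prob + eps) + (1 - target) * math.log(1 - prob + eps))

def discriminator_loss(d_real, d_fake):
    # The discriminator wants real images scored 1 and generated ones 0
    return bce(d_real, 1.0) + bce(d_fake, 0.0)

def generator_loss(d_fake):
    # Non-saturating generator loss: the generator wants its fakes scored 1
    return bce(d_fake, 1.0)

# Early in training the discriminator spots fakes easily, so G's loss is high
print(discriminator_loss(0.9, 0.1), generator_loss(0.1))
# Near equilibrium D is fooled half the time; G's loss settles at log 2
print(round(generator_loss(0.5), 3))  # 0.693
```

Training alternates: update the discriminator to lower its loss, then update the generator to lower its own, which is exactly the contest the paragraph above describes.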
Training Data and Network Architecture:
To create realistic human faces, AI models require a vast amount of training data. Researchers typically employ datasets containing tens of thousands of labeled images of faces. These datasets encompass diverse ethnicities, ages, and genders, enabling the AI models to capture the wide spectrum of human facial features and variations.
Deep convolutional neural networks (CNNs) serve as the backbone of AI face generation. CNNs excel at analyzing visual data by extracting intricate patterns and features. The generator network consists of multiple convolutional and deconvolutional layers that gradually refine the generated images. The discriminator network, on the other hand, uses a similar CNN architecture to evaluate and classify the authenticity of the generated faces.
Progressive Growing and Style Transfer:
One notable advancement in face generation is the concept of progressive growing. Initially proposed by researchers at NVIDIA, this technique involves training GANs on low-resolution images before gradually increasing the image size. Progressive growing allows for the generation of highly detailed and realistic faces.
Another technique that enhances the quality and diversity of generated faces is style transfer. By leveraging the latent space of the trained generator network, it becomes possible to manipulate specific features of the generated faces. This allows for the synthesis of faces with desired attributes, such as adjusting age, gender, or even combining features from different individuals.
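Latent-space editing of the kind described above amounts to adding a scaled attribute direction to a latent code. The 4-dimensional code and the "age" direction below are made up for illustration; in practice such directions are discovered from labeled samples of the trained generator, and real latent codes have hundreds of dimensions:

```python
def edit_latent(z, direction, alpha):
    """Move a latent code along an attribute direction by strength alpha."""
    return [zi + alpha * di for zi, di in zip(z, direction)]

z = [0.2, -0.5, 1.1, 0.0]              # hypothetical code for one generated face
age_direction = [0.0, 1.0, 0.0, -1.0]  # hypothetical "older" direction

older = edit_latent(z, age_direction, alpha=1.5)
print(older)  # [0.2, 1.0, 1.1, -1.5]
```

Feeding the edited code back through the generator would then render the "same" face with the attribute shifted, which is how age, expression, or style sliders are built on top of GANs.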
Ethical Considerations and Mitigating Bias:
While AI-generated faces hold immense potential, ethical considerations must be at the forefront of their development and deployment. One crucial concern revolves around data privacy and consent. As AI models rely on vast datasets, ensuring that individuals’ images are used with proper consent and safeguards is of utmost importance.
Moreover, there is a risk of perpetuating biases present in the training data. If the training dataset is not diverse or contains inherent biases, the generated faces may exhibit similar biases. Recognizing and mitigating these biases through careful curation of training data and algorithmic techniques is crucial to prevent discriminatory outcomes.
Applications and Future Prospects:
The applications of AI-generated human faces are vast and varied. In the entertainment industry, this technology can revolutionize character creation in movies, video games, and virtual reality experiences. It also has potential applications in facial reconstruction for historical figures and forensic facial reconstruction in criminal investigations.
Looking ahead, advancements in AI face generation could lead to breakthroughs in areas such as personalized avatars, virtual communication, and improved human-computer interactions. However, it is essential to continue research and development while maintaining ethical standards to ensure the responsible and equitable use of this technology.
Resource:
StyleGAN by Nvidia (Open Source) – GitHub » StyleGAN on GitHUB
Impressive StyleGAN Example Online » Face Generator Free
StyleGAN on Wikipedia » Wikipedia StyleGAN
Can you distinguish a real face from an artificial one? » Game with GAN Technology
Could an AI create a new religion that reinterprets current dogma and unifies humanity? Imagine an AI claiming it has established a communication link to the spiritual entity in charge of the universe, and determined that “This is what she meant to say.”
I wonder if Google is already working on that?
A few hours ago, Singapore’s global cybersecurity leader, Group-IB, identified 101,134 compromised ChatGPT accounts.
If you want the latest AI news as it drops, look here first. All of the information has been extracted here as well for your convenience.
More specifically, the credentials of over 100,000 accounts were leaked on the dark web. These compromised credentials were found within the logs of info-stealing malware sold on illicit dark web marketplaces over the past year.
The number of available logs containing compromised ChatGPT accounts reached a peak of 26,802 in May 2023, with the Asia-Pacific region experiencing the highest concentration of stolen ChatGPT credentials.
Info stealers are a type of malware that collects credentials, bank card details, and more from browsers installed on infected computers, before sending this data to the malware operator. They have emerged as a significant source of compromised personal data.
Group-IB has identified the culprit as “Raccoon,” an infamous info stealer.
Most interestingly, 2FA has been paused in ChatGPT as of June 12th, so there is currently no way to enable extra security; changing your password may be a good idea.
Full article: (link)
The idea of AI replacing hiring managers has been a topic of hot discussion. While AI can certainly play a significant role in streamlining and improving the hiring process, completely replacing hiring managers is unlikely and comes with several challenges. Here are a few points to consider:
1. Human Interaction: Hiring involves complex decision-making that goes beyond analyzing resumes and qualifications. Hiring managers often assess candidates’ soft skills, cultural fit, and potential through interviews and interactions. Human judgment and intuition are crucial in making these assessments.
2. Bias and Fairness: AI systems are only as good as the data they are trained on. If the training data is biased, the AI system may perpetuate biases in the hiring process. Hiring managers can bring awareness to bias and ensure fair evaluation of candidates.
3. Contextual Understanding: Hiring managers possess the ability to understand the specific needs and goals of the organization. They can align hiring decisions with the company’s culture, strategic direction, and long-term vision, which may be challenging for AI systems without contextual knowledge.
4. Adaptability and Flexibility: Hiring managers can adapt their approach based on the unique requirements of each role and the changing needs of the organization. They can pivot the hiring strategy, refine job descriptions, and prioritize qualities that align with evolving business objectives.
5. Candidate Experience: AI can streamline initial resume screening and automate certain aspects of the hiring process. However, the human touch and personalized communication from hiring managers contribute to a positive candidate experience, fostering engagement and a sense of connection with the company.
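Point 2 above (bias and fairness) can be illustrated with a quick selection-rate audit. The screening outcomes below are toy data, and the four-fifths rule is only a rough heuristic, not a full fairness analysis:

```python
def selection_rates(outcomes):
    """Selection rate per group: advanced / total, for a quick bias audit."""
    return {group: sum(d) / len(d) for group, d in outcomes.items()}

def four_fifths_check(rates):
    """US 'four-fifths rule' heuristic: the lowest group's selection rate
    should be at least 80% of the highest group's rate."""
    return min(rates.values()) / max(rates.values()) >= 0.8

# 1 = advanced by the screening model, 0 = rejected (toy data)
outcomes = {"group_a": [1, 1, 0, 1, 0, 1], "group_b": [1, 0, 0, 0, 0, 1]}
rates = selection_rates(outcomes)
print(rates, four_fifths_check(rates))  # the check fails on this toy data
```

A failed check does not prove the model is biased, but it is exactly the kind of signal a human hiring manager should investigate before trusting automated screening.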
I use Google Docs all the time for school, so this is super exciting.
Essay writing just became 100x easier. You can try the AI out with these 4 steps:
1. Join Google Labs: Click on this link, then select the second blue button that reads “Google Workspace” and join the waitlist (acceptance is instant).
2. Navigate to Google Docs: Once you’re in Google Docs, look for the magic wand tool (see the video to find it). This is where the real magic begins. Describe the content you’re looking to generate in a few words, and Google will do the rest. The best part is that you can lengthen it, shorten it, and even change the tone to best fit your needs.
3. It’s in your hands: Now that your workspace is set up, you can start generating any kind of content you want: a paper, an essay, a definition; the possibilities are endless.
4. Change existing text: One of the coolest features of Google Labs is its ability to edit existing text. Just select the text you’ve already written, and you can change it with one click or describe how you want to change it. For instance, you could instruct Google to “rewrite it with a formal tone.”
That’s it! Hope this was helpful.
Paper link: https://arxiv.org/abs/2306.10763
Discussion on twitter: https://twitter.com/LakshyAAAgrawal/status/1671498941009997828
Abstract: Language models of code (LMs) work well when the surrounding code in the vicinity of generation provides sufficient context. This is not true when it becomes necessary to use types or functionality defined in another module or library, especially those not seen during training. LMs suffer from limited awareness of such global context and end up hallucinating, e.g., using types defined in other files incorrectly. Recent work tries to overcome this issue by retrieving global information to augment the local context. However, this bloats the prompt or requires architecture modifications and additional training. Integrated development environments (IDEs) assist developers by bringing the global context at their fingertips using static analysis. We extend this assistance, enjoyed by developers, to the LMs. We propose a notion of monitors that use static analysis in the background to guide the decoding. Unlike a priori retrieval, static analysis is invoked iteratively during the entire decoding process, providing the most relevant suggestions on demand. We demonstrate the usefulness of our proposal by monitoring for type-consistent use of identifiers whenever an LM generates code for object dereference. To evaluate our approach, we curate PragmaticCode, a dataset of open-source projects with their development environments. On models of varying parameter scale, we show that monitor-guided decoding consistently improves the ability of an LM to not only generate identifiers that match the ground truth but also improves compilation rates and agreement with ground truth. We find that LMs with fewer parameters, when guided with our monitor, can outperform larger LMs. With monitor-guided decoding, SantaCoder-1.1B achieves better compilation rate and next-identifier match than the much larger text-davinci-003 model.
ResearchAndMarkets.com has released a new report analyzing the global AI market and making predictions for 2023.
Here are the Six Key Emerging AI Market Trends
1. Democratization of AI reduces enterprise workloads and helps jump-start machine learning projects.
2. Multimodal AI plays an important role in unlocking data’s potential.
3. Increased investment in generative AI is unleashing creative industry applications.
4. Conversational AI is emerging as a highly deployed AI technology.
5. Vendors are building edge-to-cloud integration platforms and service offerings to support data orchestration.
6. Ethical AI principles are emerging as a core aspect of implementing AI technologies.
What happened in AI this week?
1. EU approves world’s first laws regulating AI, challenges tech giants’ power
The landmark E.U. AI Act aims to protect consumers from dangerous AI applications, for example by forcing tech companies to label AI-generated content.
Source: https://theconversation.com/eu-approves-draft-law-to-regulate-ai-heres-how-it-will-work-205672
2. New OpenAI updates for GPT 3.5 and 4
Updates include new function calling abilities for devs, enhancements to the GPT-3.5 Turbo and GPT-4 models, and cuts to current pricing. The updates aim to improve usability for developers.
Source: https://techcrunch.com/2023/06/13/openai-intros-new-generative-text-features-while-reducing-pricing/
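As a rough sketch of what the new function-calling interface looks like from a developer’s side, a chat request now carries JSON-Schema descriptions of the functions the model may call (the function name and fields below are invented for illustration; consult OpenAI’s API reference for the exact request shape):

```python
import json

# Hypothetical function definition passed alongside a chat request.
# Instead of free-form text, the model can reply with a structured
# call to this function (name + JSON arguments).
get_weather = {
    "name": "get_current_weather",
    "description": "Get the current weather for a city",
    "parameters": {                      # JSON Schema for the arguments
        "type": "object",
        "properties": {
            "city": {"type": "string"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["city"],
    },
}

payload = {
    "model": "gpt-3.5-turbo-0613",
    "messages": [{"role": "user", "content": "Weather in Paris?"}],
    "functions": [get_weather],
}
print(json.dumps(payload, indent=2))
```

The app then executes the returned call itself and feeds the result back to the model, which is what makes the feature useful for tool use and structured extraction.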
3. UN chief backs proposal to monitor AI
During the rollout of a policy initiative on disinformation, UN Secretary-General Antonio Guterres expressed concerns about generative AI and voiced support for creating an international AI watchdog.
Source: https://www.voanews.com/a/un-chief-considering-watchdog-agency-for-ai-/7133567.html
4. Google launches new AI-powered travel & product search feature
As part of its new Search Generative Experience (SGE), you will now get AI-powered travel and product recommendations, along with informative content such as “things to keep in mind when using a product.”
Source: https://techcrunch.com/2023/06/14/google-intros-new-ai-powered-travel-and-product-search-features/?guccounter=1&guce_referrer=aHR0cHM6Ly93d3cuZ29vZ2xlLmNvbS8&guce_referrer_sig=AQAAAG1Vgf9u8Yoi7OGOhvYsdhd3OUU9YvKftJRcgBg8jRTj7IctWKCEezHPa3j4Ky9nr3TzGy9Dvs6EYSRmDm6JBoh1Gp3TAaQPrLhTfBACemp2bdJUXbRbviBdubFlw0bS1hrg_1H4L5ABfGBeZQS3kqvNgXQ5WIAtDxgzFBRGUYZI
5. Google makes its Generative AI platform available to everyone
As part of Google Cloud, the company has made its Machine Learning Platform as a Service (ML PaaS) available to everyone. You can now use the Word Completion Model, Model Garden, and more.
Source: https://siliconangle.com/2023/06/07/google-cloud-makes-generative-ai-support-generally-available-vertex-ai/
6. Amazon adds new AI features to summarize product reviews
Amazon is now using generative AI to inform customers of what previous customers liked and disliked about the product, saving time in going through multiple reviews.
Source: https://www.cnbc.com/2023/06/12/amazon-is-using-generative-ai-to-summarize-product-reviews.html
7. Adobe announces AI Generative Recolor feature for Adobe Illustrator
Creatives can now use AI to change color, themes, and fonts of graphics using AI prompts. The features come in a broader push towards AI that Adobe has taken over the past few months.
Source: https://www.adobe.com/products/illustrator/generative-recolor.html
8. Adobe announces Firefly for enterprises to create custom generative AI models
Companies can now train Adobe’s AI model on their branded assets, making it easier to create designs that match their brand theme and style.
Source: https://www.theverge.com/2023/6/8/23753564/adobe-firefly-enterprise-generative-ai-express-commercial
9. Zuck calls Meta’s new AI ‘too dangerous’ to make public
Meta has developed a highly versatile AI for speech generation called Voicebox, but will not release it to the public due to concerns about potential misuse of the technology.
Source: https://www.dailymail.co.uk/sciencetech/article-12215889/Meta-announces-new-AI-replicate-human-voices-wont-release-risks-misuse.html
10. Windows 12 will be full of AI features
Windows already has some AI support for the camera and microphone, but the next version is expected to go much further.
Windows 12 should also make much better use of NPUs, or neural processing units, which are essentially processors that specialize in AI functionalities.
AI will be included in search, analysis, identification, and other features.
11. Marvel used AI to create the intro for Secret Invasion
The series is about shape-shifters that try to imitate humans, which is the exact phrase that can be used to describe AI. How convenient 🙂 You can check out the footage from the source.
- The Marvel series ‘Secret Invasion’ uses generative AI for a specific sequence in the opening credits, as confirmed by the director Ali Selim.
- Generative AI uses millions of images created by artists and photographers to train it, raising issues of using these pieces without artists’ permission or compensation, and potentially replacing actual artists with AI.
- Despite the visual appeal of AI art, there is controversy when it’s used in such high-profile projects without full understanding of its creation process and potential implications for artists.
12. AI can now predict pop music hits better than humans
According to researchers from the US, artificial intelligence can now identify hit pop songs with an impressive 97% accuracy, outperforming human judges. This technology could render TV talent show judges obsolete and significantly reduce overall music production costs.
Latest AI trends in June 2023: June 21st 2023
Understanding Evaluation Metrics for Machine Learning Models with ChatGPT
What Is Reinforcement Learning?
Artificial intelligence (AI) programs constantly use machine learning to improve speed and efficiency. In reinforcement learning, AI is rewarded for desired actions and punished for undesired actions.
Reinforcement learning can only take place in a controlled environment. The programmer assigns positive and negative values (or “points”) to certain behaviors, and the AI can freely explore the environment to seek rewards and avoid punishments.
Ideally, the AI will delay short-term gains in favor of long-term gains, so if it chooses between earning one point in one minute or earning 10 points in two minutes, it will delay gratification and go for the higher value. At the same time, it will learn to avoid punitive actions that cause it to lose points.
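The reward-and-penalty loop described above can be sketched with tabular Q-learning on a tiny chain environment (this toy setup is mine, not from the article): the discount factor gamma is exactly what makes the agent forgo a small immediate reward in favor of a larger delayed one.

```python
import random

# Tiny chain world: states 0..4. Going right eventually reaches a +10
# goal at state 4; going left from state 0 yields an immediate +1 and
# ends the episode. Discounting (gamma) teaches the agent to skip the
# quick +1 for the larger delayed reward.
N, ACTIONS = 5, (1, -1)                    # move right / move left
alpha, gamma, eps = 0.5, 0.9, 0.1
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}

random.seed(0)
for _ in range(500):                       # training episodes
    s = 0
    while True:
        if random.random() < eps:          # explore
            a = random.choice(ACTIONS)
        else:                              # exploit current estimates
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        if s == 0 and a == -1:             # small immediate reward, done
            r, done, s2 = 1.0, True, s
        elif s + a == N - 1:               # big delayed reward, done
            r, done, s2 = 10.0, True, s + a
        else:
            r, done, s2 = 0.0, False, s + a
        best_next = 0.0 if done else max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        if done:
            break
        s = s2

# The agent ends up valuing "head right" (delayed +10, discounted to
# 0.9**3 * 10 = 7.29 from state 0) over the quick +1.
print(Q[(0, 1)] > Q[(0, -1)])
```

Lowering gamma toward zero in this sketch flips the preference back to the immediate +1, which is the trade-off the paragraph describes.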
Examples of Reinforcement Learning
Real-world applications of AI based on reinforcement learning are somewhat limited, but the method has shown promise in laboratory experiments.
For example, reinforcement learning has trained AI to play video games. The AI learns how to achieve the game’s goals through trial and error. For example, in a game like Super Mario Bros., the AI will determine the best way to reach the end of each level while avoiding enemies and obstacles. Dozens of AI programs have successfully beaten specific games, and the MuZero program has even mastered video games that it wasn’t originally designed to play.
Reinforcement learning has been used to train enterprise resource management (ERM) software to allocate business resources to achieve the best long-term outcomes. Reinforcement learning algorithms have even been used to train robots to walk and perform other physical tasks. Reinforcement learning has also shown promise in statistics, simulation, engineering, manufacturing, and medical research.
Limitations of Reinforcement Learning
The major limitation of reinforcement learning algorithms is their reliance on a closed environment. For example, a robot could use reinforcement learning to navigate a room where everything is stationary. However, reinforcement learning wouldn’t help navigate a hallway full of moving people because the environment is constantly changing. The robot would just aimlessly bump into things without developing a clear picture of its surroundings.
Since this learning relies on trial and error, it can consume more time and resources. On the plus side, reinforcement learning doesn’t require much human supervision.
Due to its limitations, reinforcement learning is often combined with other types of machine learning. Self-driving vehicles, for example, use reinforcement learning algorithms in conjunction with other machine learning techniques, such as supervised learning, to navigate the roads without crashing.
President Biden emphasizes the importance of ensuring safety in AI before its deployment, urging for bipartisan privacy legislation and new safeguards for this emerging technology. He calls for stricter limits on personal data collection, bans on targeted advertising to children, and the requirement for companies to prioritize health and safety.
Biden’s Stance on AI Safety and Privacy: Biden advocates for pre-release safety assessments of AI systems. He stresses the risks that unsafeguarded technology can pose to society, economy, and national security.
- He emphasizes managing these risks to seize the opportunities AI offers.
- He reaffirms his request for bipartisan privacy legislation.
Effect on Social Media and Advertising: Biden identifies potential harm from powerful technologies like social media, especially without adequate safeguards.
- He notes the need for strict restrictions on personal data collection.
- He advocates banning targeted advertising to children.
- He insists on companies prioritizing health and safety.
Discussion with Tech Leaders: Biden met with prominent figures in the AI and education sectors, including leaders from Khan Academy, the Center for Humane Technology, and the Algorithmic Justice League among others.
- Their collective expertise and influence are expected to contribute to developing new AI safeguards.
Efforts Towards Privacy and Security Protections: White House Chief of Staff Jeff Zients oversees the development of additional steps the administration can take on AI.
- Zients notes the cooperation of AI companies in introducing privacy and security commitments.
- Vice President Kamala Harris plans to convene civil rights and consumer protection groups for AI discussions.
Involvement of Major AI Firms: The administration seeks to involve leading AI companies in its efforts.
- Meetings have been held with CEOs of major firms like OpenAI, Microsoft, and Alphabet.
- These companies have agreed to participate in the first independent public evaluation of their systems.
Prospective Regulatory Measures: The administration looks towards broader regulatory initiatives for AI, involving multiple federal agencies.
- The Commerce Department considers rules for mandatory AI model certification before release.
- The Federal Trade Commission monitors AI tool usage.
- Congress scrutinizes AI technology, with Senate Majority Leader Chuck Schumer set to outline his vision for AI’s potential and its safeguards.
The authors state that GPT-4 was able to score 100% on a randomly selected set of 288 questions. However, on close inspection of the dataset, it was found that roughly 4% of the questions were “unsolvable”, for example:
“At the command prompt, type: traceroute 18.31.0.200 Describe what is strange about the observed output, and why traceroute gives you such an output. Refer to the traceroute man page for useful hints. Copy/paste any of the relevant portions of output below.”
The true answer cannot be found given this information: the context is too limited, and without access to an interactive terminal (no such access was given in this work), it would be impossible for an LLM agent to answer.
The reviewers also discovered evidence of significant data leakage in the few-shot examples provided to the model. Many were nearly identical to the problems themselves, essentially giving the model the answers.
The paper’s grading methodology was also problematic. The system asked GPT-4 to grade answers given the original question, the ground-truth solution, and the model’s own answer. This allows the AI to produce inaccurately high self-assessment scores, especially in technical fields where it may have hidden misunderstandings.
Second, the prompt cascade approach used in the paper risks data leakage: it provides binary feedback based on the ground truth, and the system reprompts until the correct answer is reached. This issue is particularly significant for multiple-choice problems (representing 16% of the test set), where unlimited attempts almost guarantee the right answer, comparable to a student receiving continuous feedback about the accuracy of their answers until they get them right.
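To see why unlimited reprompting with binary correct/incorrect feedback trivializes multiple-choice grading, here is a quick back-of-the-envelope check (my own illustration, not from the paper): even a model guessing uniformly at random among four options is almost certain to “solve” the question within a handful of attempts.

```python
# Chance a pure random guesser "solves" a 4-option multiple-choice
# question within n attempts under a reprompt-until-correct loop.

def p_with_replacement(n, k=4):
    """Guessing blindly each attempt among k options."""
    return 1 - ((k - 1) / k) ** n

def p_eliminating(n, k=4):
    """Remembering which answers were marked wrong: success is
    guaranteed by attempt k."""
    return min(n / k, 1.0)

for n in (1, 2, 4, 8):
    print(n, round(p_with_replacement(n), 3), p_eliminating(n))
```

With answer elimination the guesser is certain to succeed by the fourth attempt, and even blind guessing is ~90% likely to hit within eight, so a high score under this protocol says little about the model’s actual competence.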
There was an extensive analysis done by 3 MIT EECS seniors which can be found here: (link)
There are a good number of quality AI chat alternatives out there besides ChatGPT, and some even offer GPT-4 for free! Here’s a list of alternative chatbots to try out (I’ve tried every one of these myself):
Perplexity: “The first conversational search engine” (GPT-3.5 Free / GPT-4 Paid $20 a month)
Bing: Microsoft’s Chatbot with multimodal capabilities. (GPT-4 Free)
Poe: Quora’s AI app with multiple models (GPT-3.5 Free / GPT-4 free with ‘limited access’)
AgentGPT: “Autonomous AI agent” Give one prompt and it will run continuously until finished. (GPT 3.5 Free / GPT-4 API access required) sign up for GPT-4 API waitlist here
HuggingFace: Largest open source AI community find thousands of different open source projects (Free site)
Ora: Access community LLM’s or build your own (GPT-3.5 Free / GPT-4 Free) Direct link to free GPT-4
Inflection Pi: A personal AI chatbot (not meant for research purposes) (Free site). The underlying model is unclear; I’ve seen conflicting information, but I believe it’s GPT-3.5.
Nat.dev: Use GPT-4 in playground and compare to other models (GPT-4 $5 credit fee)
Merlin: Access GPT-4 chatbot in any browser (GPT-4 limited free plan / GPT-4 unlimited starting at $19 a month)
These are all credible chatbots that have been running for months, though the majority do require email signups. Hope this helps!
The Telegraph reports
Victims should be able to use artificial intelligence (AI) to find out their chances of success in court claims, Britain’s top judge said.
Lord Burnett of Maldon, the Lord Chief Justice, cited AI technology already being used in Singapore that allows road traffic accident victims to tap in the details and find out within 10 minutes the probable outcome of any litigation they might decide to take.
The system helps victims decide whether it is worth pursuing in the courts based on the AI technology’s analysis of the current law and case precedents. This can form the basis for a swifter settlement without a victim resorting to legal proceedings.
Lord Justice Burnett told peers on the Lords constitution committee: “It is not binding, you can issue proceedings, but it is the sort of thing that would be of some use. So I think AI is something which we want to be looking at to enhance access to justice.
“The administration of justice and the courts should try to harness developments in technology which enhance the rule of law and what we do. We should never be the slave to it, but undoubtedly there will be ways in which artificial intelligence can be used to increase access to justice, for example.”
Read more: https://www.telegraph.co.uk/news/2023/06/20/ai-victims-court-case-win-lord-chief-justice-britain/
AI Daily News on June 21st 2023
Google’s First Two AI-powered Ad formats
With this new Google Ads update, Google is bringing faster ad-set creation for Demand Gen ads, along with new updates to YouTube ad campaign creation. These Demand Gen video ads with AI-powered lookalike audiences are performing great with beta testers like Arcane and Samsung.
Tiktok’s AI Ad Script Generator
TikTok’s product marketing team announced a new advertising feature for marketers this week. You can watch the video tutorial to see the new AI ad tool in action; it is now available to all advertisers in TikTok Ads Manager.
Supermetrics launched AI integration with Google Sheets
The platform, recommended by Google Workspace for marketing data, launched new GPT-4 integrations for its Google Sheets integration.
Meta & Microsoft Sign a pact to responsibly use AI
The Partnership on AI (PAI) association shared that both companies have signed a pact with them about AI usage. Both platforms will follow PAI’s framework for partnering on non-profit AI research and projects.
Ogilvy is asking other agencies to label AI-generated content
As AI influencers take over, Ogilvy is asking agencies and policymakers to require brands to label AI-generated influencer content, arguing that influencers are trusted figures in marketing and that unlabeled AI influencers break consumer trust.
Microsoft AI Ads
During a recent event on AI advertising, Microsoft’s VP of Ads shared her insights on where Microsoft is heading with AI ads for Bing Chat and Search.
For context, Microsoft has so far introduced around 5-8 new AI-related product updates. The ad platform is changing fast, and the company is embracing AI ads faster than Google and Amazon.
Adobe’s new AI Update for Illustrator
Adobe Firefly is already making a huge spark with new features, and last week the platform brought a new Generative Recolor feature to Adobe Illustrator.
This new feature is a great addition for brand designers and marketers looking to build a new brand identity.
Bing Chat tests Visual Search
Bing’s next step is to take on the Google Lens product: Microsoft is testing a visual search and photo recognition feature for Bing Chat.
This feature will have a major impact on Google & Pinterest’s visual search features.
Meta’s new ‘Voicebox’ AI: innovation or privacy risk?
- Meta has developed a new AI system, “Voicebox”, which can generate convincing speech in a variety of styles and languages, and perform tasks such as noise removal, outperforming previous models in terms of speed and error rates.
- Despite potential benefits such as giving a voice to those who can’t speak, enabling voice inclusion in games, and facilitating language translation, Meta has decided not to release the model due to concerns over misuse and potential harm, like unauthorized voice duplication and the creation of misleading media content.
- To manage risks, Meta has developed a separate system that can effectively distinguish between authentic speech and audio generated with Voicebox, but it remains cautious about releasing Voicebox to the public, emphasizing the importance of balancing openness with responsibility.
- Mark Zuckerberg shared that Meta has built one of the best AI speech generation products, but because the product is too dangerous for public access, it will not become available anytime soon, perhaps not for the next few years.
You can check Meta’s Voicebox release statement here.
Pixar uses Disney AI
As revealed by a recent Wired article, Pixar utilised Disney’s AI technology for its upcoming movie Elemental.
OpenAI plans app store for software
OpenAI is planning to launch a marketplace where developers can sell their AI models built on top of ChatGPT, according to sources. The marketplace would offer tailored AI models for specific uses, potentially competing with app stores from companies like Salesforce and Microsoft, while expanding OpenAI’s customer base. This platform could serve as a safeguard against reliance on a single dominant AI model. It’s not clear whether OpenAI would charge commissions on those sales or otherwise look to generate revenue from the marketplace.
If OpenAI proceeds with this, it could herald a new era in the AI industry. It would provide a platform for businesses to not only create but also monetize their AI models, fostering a more collaborative and innovative environment.
While the idea is promising, it’s not without potential hurdles. Questions around intellectual property rights, quality control, and security could arise. How will OpenAI ensure the quality and safety of the models being sold?
This marketplace could potentially accelerate the adoption of AI across various industries. By providing a platform where businesses can purchase ready-made, customized AI models, the barrier to entry for using AI could be significantly lowered.
Elon Musk repeats call for halt on AI development
Elon Musk reiterated his belief that there should be a pause in the development of AI and called for regulations in the industry. He expressed concerns about the potential risks of digital superintelligence and emphasized the need for AI regulation.
Xi Jinping says China welcomes US AI tech
Chinese President Xi Jinping held discussions with Bill Gates regarding the global growth of AI and expressed his support for U.S. companies, including Microsoft, bringing their AI technology to China.
EU lawmakers vote for tougher AI rules as draft moves to final stages
European Union lawmakers have agreed on amendments to draft AI rules that would ban the use of AI in biometric surveillance and require disclosure of AI-generated content by generative AI systems like ChatGPT. The proposed changes could lead to a clash with EU countries opposing a complete ban on AI in biometric surveillance, and the amendments also include requirements for copyright disclosure, distinguishing deep-fake images, and safeguarding against illegal content.
vLLM: Cheap, 24x faster LLM serving than HF Transformers
The performance of LLM serving is bottlenecked by memory. vLLM addresses this with PagedAttention, a novel attention algorithm that brings the classic idea of OS’s virtual memory and paging to LLM serving.
It makes vLLM a high-throughput and memory-efficient inference and serving engine for LLMs. vLLM outperforms HuggingFace Transformers by up to 24x (without requiring any model architecture changes) and Text Generation Inference (TGI) by up to 3.5x, in terms of throughput.
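The paging analogy can be made concrete with a toy allocator (a simplified sketch of the idea, not vLLM’s actual code): instead of reserving one long contiguous KV-cache region per request, the cache is carved into fixed-size blocks, and each sequence keeps a block table mapping its logical token positions to whatever physical blocks happened to be free.

```python
# Toy sketch of paged KV-cache management: fixed-size physical blocks
# plus per-sequence block tables, so memory is claimed on demand
# instead of being reserved contiguously up front.
BLOCK = 16                                   # tokens per block

class PagedKVCache:
    def __init__(self, num_blocks):
        self.free = list(range(num_blocks))  # free physical block ids
        self.tables = {}                     # seq_id -> [physical block ids]
        self.lengths = {}                    # seq_id -> tokens stored

    def append_token(self, seq_id):
        table = self.tables.setdefault(seq_id, [])
        n = self.lengths.get(seq_id, 0)
        if n % BLOCK == 0:                   # current block full: map a new one
            table.append(self.free.pop())
        self.lengths[seq_id] = n + 1

    def physical_slot(self, seq_id, pos):
        # Translate a logical token position to (physical block, offset).
        table = self.tables[seq_id]
        return table[pos // BLOCK], pos % BLOCK

cache = PagedKVCache(num_blocks=8)
for _ in range(40):                          # generate a 40-token sequence
    cache.append_token("req-0")
print(len(cache.tables["req-0"]))            # ceil(40/16) = 3 blocks
```

Because a sequence only ever wastes the tail of its last block, many more requests fit in the same GPU memory, which is where the throughput gains over contiguous per-request allocation come from.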
Google DeepMind’s RoboCat pushes the boundaries of robotic capabilities
Google DeepMind has created RoboCat, an AI model that can control and operate multiple robots. It can learn to do new tasks on various robotic arms with just 100 demonstrations and improves skills from self-generated training data.
RoboCat learns more quickly than other advanced models because it uses a wide range of datasets. This is a significant development for robotics research as it reduces the reliance on human supervision during training.
Cisco introduces networking chips for AI supercomputers that will compete with offerings from Broadcom and Marvell Technology.
The chips come from its Silicon One series and are currently being tested by five of the six major cloud providers, including notable players like AWS, Microsoft Azure, and Google Cloud, which together dominate the market for cloud computing.
Teleperformance signed a multi-year $185M Azure Cloud commitment with Microsoft to launch GenAI platform.
Through the collaboration, the objective is to provide Microsoft Cloud infrastructure solutions to clients. Teleperformance will also use Microsoft Azure AI, to launch TP GenAI, a new suite of AI solutions for faster and improved business processes.
OpenAI’s efforts to influence EU AI regulations
- OpenAI has lobbied the EU to soften proposed AI regulations, arguing that general-purpose AI systems (GPAIs) like ChatGPT shouldn’t be considered “high risk” under the forthcoming EU AI Act, which would impose strict safety and transparency requirements.
- Despite initial secrecy, OpenAI supported the inclusion of “foundation models” (powerful AI systems used for various tasks) in the AI Act, which demands more transparency, including disclosing whether copyrighted material has been used for training AI models.
- The EU AI Act, with some of OpenAI’s proposed changes incorporated, has been approved by the European Parliament, but still needs to go through a final “trilogue” stage before it comes into effect, a process expected to take about two years.
Latest AI trends in June 2023: June 20th 2023
Neuroscience Rocks the Music Industry: Machine Learning Predicts Hit Songs with 97% Accuracy!
Galileo launches LLM Studio to revolutionize AI adoption in enterprises
ChatGPT and Google Bard can be tricked into supplying Windows 11 keys
- A YouTuber was able to make ChatGPT generate valid Windows 95 activation codes earlier this year, and a Twitter user recently managed to obtain Windows 10 and 11 keys through a creative request to the bot.
- The chatbots seem to be providing generic installation keys likely gathered from the internet; these keys allow installation but not activation of the Windows operating system and are not a permanent solution.
- While the use of such keys lacks moral and legal justification, legal options exist for obtaining free or heavily discounted Windows licenses from other sources.
Generative AI tools, such as ChatGPT, should be developed inclusively and in consultation with the public to mitigate risks, with iterative deployment to allow societal adaptation and user control, says OpenAI’s CEO Sam Altman.
ChatGPT revolutionizes the way we interact with artificial intelligence, presenting an innovative avenue to seek assistance with various daily tasks and engage in meaningful conversations.
This cutting-edge AI model exhibits remarkable proficiency in comprehending natural language, thanks to its astute understanding and powerful deep learning algorithms. Even when conversations take complex turns, ChatGPT adeptly grasps the nuances, ensuring an uninterrupted flow of communication.
Nonetheless, it is essential to acknowledge that ChatGPT is just one among several chatbot options available in the ever-expanding landscape of artificial intelligence. Numerous alternatives exist, each offering unique capabilities and catered solutions to meet your communication needs effectively.
Daily AI News breakdown
Deepmind’s New AI Agent Learns 26 Games in Two Hours Source: The Decoder
• Deepmind’s AI, “Bigger, Better, Faster” (BBF), masters 26 Atari games in two hours, matching human efficiency.
• BBF uses reinforcement learning, a core research area of Google Deepmind.
• BBF achieves superhuman performance on Atari benchmarks with only 2 hours of gameplay.
• The AI uses a larger network, self-monitoring training methods, and other methods to increase efficiency.
• BBF can be trained on a single Nvidia A100 GPU, requiring less computational power than other approaches.
• BBF is not superior to humans in all games, but it’s on par with systems trained on 500 times more data.
• The team sees the Atari benchmark as a good measure for reinforcement learning (RL).
• BBF has not yet hit a performance ceiling; it continues to gain performance with more training data.
• The team hopes their work will inspire other researchers to improve sample efficiency in deep RL.
• More efficient RL algorithms could re-establish the method in an AI landscape currently dominated by self-supervised models.
Affected Industries:
- Video Game Industry: AI gaming agents could revolutionize gameplay and create more immersive experiences.
- AI Technology Industry: Advances in AI gaming agents could drive further innovation and development in AI technologies.
- Education and Training Industry: AI gaming agents could be used for educational games and training simulations.
- Entertainment Industry: The entertainment sector could see new forms of interactive content driven by AI gaming agents.
- Software Development Industry: Developers may need to acquire new skills and tools to integrate AI gaming agents into their applications.
The World’s Wealthiest People See Fortunes Grow by Over $150 Billion Thanks to AI-Fueled Stock-Market Boom Source: Markets Insider
• AI-related stocks surge in 2023, following ChatGPT’s successful debut.
• Wealthy individuals profit significantly from the rally.
• Mark Zuckerberg and Larry Ellison’s fortunes increase by over $40 billion each.
• AI is a defining theme for stocks in 2023, contributing to wealth accumulation.
• Investors rush to acquire shares in companies expected to drive AI’s rise.
• Tech giants like Meta Platforms and Nvidia experience triple-digit gains due to the AI boom.
• Microsoft, Alphabet, and Oracle also see significant increases.
• Zuckerberg’s wealth increases by over $57 billion due to Meta shares rallying 134% year-to-date.
• Larry Ellison surpasses Bill Gates on the rich list with his fortune up $47 billion in 2023.
• Bill Gates’ wealth increases by $24 billion this year due to his Microsoft shares.
• Nvidia founder Jensen Huang’s personal fortune increases by $24 billion.
• The combined wealth of the rich list members jumps by over $150 billion in 2023.
Affected Industries:
- Social Media Industry (Meta): AI advancements contribute to Meta’s significant stock rally.
- Software Industry (Oracle): Oracle’s stock gains due to the AI boom.
- Tech Industry (Alphabet): Alphabet benefits from the surge in AI-related stocks.
- Software Industry (Microsoft): Microsoft emerges as a preferred AI play for investors.
- Semiconductor Industry (NVIDIA): NVIDIA’s stock jumps due to its role in AI advancements.
Google Tells Employees to Stay Away from Its Own Bard Chatbot Source: Gizmodo
• Google refines its AI chatbot, Bard, and warns employees about chatbot use.
• Alphabet Inc. advises employees not to enter confidential information into chatbots.
• Concerns arise over potential leaks as chatbots may use previous entries for training.
• Samsung confirms internal data leak after staff used ChatGPT.
• Amazon and Apple also caution employees about sharing code with ChatGPT.
• Bard is built with Google’s artificial intelligence engine, LaMDA.
• Google CEO Sundar Pichai asked employees to test Bard for 2-4 hours daily.
• Google delays Bard’s release in the EU due to privacy concerns from Irish regulators.
• Tech companies, including Apple, show interest in building their own large language models.
Affected Industries:
Technology Industry (Alphabet)
- Alphabet’s Google warns employees about using its Bard chatbot.
Consumer Electronics Industry (Apple)
- Apple cautions employees about sharing code with AI chatbots.
E-commerce Industry (Amazon)
- Amazon advises employees not to share code with AI chatbots.
Latest AI trends in June 2023: June 19th 2023
Meet LLM-Blender: A Novel Ensembling Framework to Attain Consistently Superior Performance by Leveraging the Diverse Strengths of Multiple Open-Source Large Language Models (LLMs)
AI Terminology 101: Mastering Data Augmentation for Powerful Machine Learning
Workers are hiding their AI productivity hacks from bosses
A Wharton professor believes that businesses should motivate their employees to share their individual AI-enhanced productivity hacks, despite the prevalent practice of hiding these tactics due to corporate restrictions.
Workers’ Use of AI and Secrecy:
- Employees are increasingly using AI tools, such as OpenAI’s ChatGPT, to boost their personal productivity and manage multiple jobs.
- However, due to strict corporate rules against AI use, these employees often keep their AI usage secret.
Issues with Corporate Restrictions:
- Companies tend to ban AI tools because of privacy and legal worries.
- These restrictions result in workers being reluctant to share their AI-driven productivity improvements, fearing potential penalties.
- Despite the bans, employees often find ways to circumvent these rules, like using their personal devices to access AI tools.
Proposed Incentives for Disclosure:
- The Wharton professor suggests that companies should incentivize employees to disclose their uses of AI.
- Proposed incentives could include shorter workdays, making the trade-off beneficial for both employees and the organization.
Anticipated Impact of AI:
- Generative AI is projected to significantly transform the labor market, particularly affecting white-collar and college-educated workers.
- As per a Goldman Sachs analysis, this technology could potentially affect 300 million full-time jobs and significantly boost global labor productivity.
The position of Research Scientist, Machine Learning at OpenAI pays up to $370,000 annually. While everybody is losing their minds about what AI will do to their job, guess what people at the cutting edge are doing? They are leaning into this whole AI thing and looking for AI jobs! And they have 370,000 reasons a year to do so! Not too shabby. Now, granted, OpenAI is like the number one AI company ever created, and there will only be a few positions there reserved for the Einsteins of AI, but still, there are heaps of other AI jobs at other companies that pay around $200 K a year. I say this in pretty much every content piece, and I will keep saying it – learn AI! Don’t fear it – embrace it! [source: https://www.usatoday.com/story/tech/columnist/komando/2023/06/15/ai-jobs-pay-big-money-perks/70308009007/]
We have the first voice-cloned AI DJ. The DJ AI Ashley will be a part-time host at Oregon’s Live 95.5. Ok, maybe DJs will lose some work because of AI, but come on, who are we kidding, was DJing even a real job, to begin with? I’m kidding, I’m kidding, DJs are fine and they’re gonna continue to be fine and to press buttons and to spin records and to hit on your girlfriend and do all the other DJ things. [source: https://www.businessinsider.com/ai-powered-dj-radio-show-host-portland-oregon-station-2023-6]
Let’s move on to more serious topics, and nothing is more serious than talking about China. Yeah, already killed my own buzz just by mentioning it. No, seriously, Chinese lifelong president Xi Jinping tells Bill Gates he welcomes U.S. AI tech in China. Well, of course he does, China wants as much Western technology as it can get its hands on. Why? To copy it, duh. This comes after Microsoft pulled back some of its best AI talent from their Chinese offices back to Canada, as they feared their talent would be either poached by Chinese startups, or even approached by the government in some way. I don’t think Microsoft will go back in China after that. [source: https://www.reuters.com/technology/chinas-xi-tells-bill-gates-he-welcomes-us-ai-tech-china-2023-06-16/]
Congress is considering whether AI can hold patents. Last month, scientists at MIT used AI to discover the first new antibiotic since the 80s. The new drug was identified from a library of nearly 7,000 drug compounds. They used a machine-learning model trained to evaluate whether a chemical compound will inhibit the growth of the bacteria that cause the infections. Back in April, the Supreme Court declined to consider the case of Stephen Thaler, a computer scientist who wanted to patent a beverage holder and an emergency light beacon that were designed entirely by AI, without any human input. Thaler’s application was shot down by the U.S. Patent and Trademark Office because only humans can be inventors, and the decision was upheld by lower courts. But! In South Africa, an AI system was listed as the inventor and granted a patent. That’s the situation lawmakers fear: innovation escaping to greener pastures. Tons of experts debate and disagree on this matter. I don’t think granting patents to AI makes sense. Ideally, the patent should be granted to the people who designed that specific AI training algorithm, the people who provided the data it was trained on, and, of course, the person running the algo, since AI still can’t run itself. It’s a tricky matter anyway, I’m curious to see what happens. [source: https://godanriver.com/news/nation-world/government-politics/congress-ponders-whether-artificial-intelligence-should-have-the-power-of-the-patent/article_c5d31741-1d16-5529-86e4-ea53c89eb95b.html]
Mercedes announced it will add ChatGPT to the infotainment systems in almost one million cars. People wonder why Mercedes did this, as many don’t really see the need for it. My guess is they’re looking for a way to capitalize on a growing trend, and I’m the last one that could blame them for that. [source: https://techcrunch.com/2023/06/15/mercedes-is-adding-chatgpt-to-its-infotainment-system-for-some-reason/]
Yesterday I talked about Voicebox, Meta’s new AI voice tool, and I also proclaimed that I’m excited to try it out. Well, today we find out that Meta won’t release Voicebox to the public just yet. Apparently, it is too dangerous for public use. Oooh, spooky… I think they’re trying to build up a bit of hype with claims like this, but on the other hand, I kinda agree that releasing these AI tools for public use as soon as they are made does create a lot of potential for misuse. [source: https://www.theverge.com/2023/6/17/23764565/meta-says-its-new-speech-generating-ai-model-is-too-dangerous-for-public-release]
Speaking of Meta, they have bigger problems than some kids skipping school by using their voice tools to call their teachers pretending they are their parents. Meta lost a third of their AI talent last year. Not sure where these people went, looks like some went to OpenAI, and others just burned out. To make things worse, they didn’t even get a shoutout from the White House at the AI leadership summit back in May. And to make things even worse, just 26% of Meta employees believe that Zuck is doing a good job leading the company in these turbulent times. I’m gonna go contrarian here and say that Zuck will rally his troops and that Meta may catch up to its other tech titan siblings. I mean, I don’t believe this wholeheartedly, but I think he has a shot. They do have a ton of data after all, and they can always find other AI nerds to work for them. They will open-source their LLM Llama, they added some AI to Facebook and Instagram, and this Voicebox thing will probably be pretty good. Plus, I think Zuckerberg is one of those wartime CEOs that do better in uncertain times, when the stakes are high, and underperform in boring times when nothing seems to happen, which was the period of several years before ChatGPT. Not betting the house on it, but I think Zuck will pull a rabbit out of the hat and correct Meta’s course in the AI waters. [source: https://www.yahoo.com/lifestyle/meta-lost-third-ai-researchers-152058353.html]
I found this interesting chart on Twitter posted by user AI_decoded_ (although I’m not sure it’s theirs because I saw other accounts posting it as well). It shows the increase of assets in certain asset classes, as far as I can tell (the labeling is a bit confusing to me), and we can see that AI has had quite a nice growth since the end of 2022. The implication that the creators of this chart are trying to make is that we may be in an AI bubble, but even if we are, the people that are getting educated on this will benefit one way or the other. I’m not so sure I agree about the bubble part. There is a non-zero chance that the line for AI starts going down today and never goes back up again. That is a real possibility. I don’t think it’s likely, but it’s still a possibility. Personally, I’m going all in on AI myself, with pretty much all of my businesses and entrepreneurial efforts. And I definitely have some skin in the game here, as the Youtube channel really takes a lot of time and resources to produce, even though it probably doesn’t look like that. [source: https://twitter.com/Ai_Decoded_/]
Since we’re talking about bubbles, investors might be aware of the potential AI bubble, and they still don’t seem to care. Extreme valuations of companies that haven’t actually done anything yet are signs of a potential bubble in the start-up space, says Thomas Rice, portfolio manager for Perpetual’s Global Innovation Share Fund. Even Sam Altman says things like “It is definitely like the new bubble – all the people that were working on crypto a few years ago are now working on AI”. Fair enough, that’s definitely true, and as a sidenote, I like Altman’s approach of always quieting down the hype and trying to ground people’s expectations in reality. The good thing about bubbles in general is that some people manage to make money in them. The bad thing about bubbles is that the people who end up making money are usually scumbags. It ends up being just one big game of musical chairs – people invest in companies they don’t know much about, and when most of those companies crash and burn, everyone except the scumbags loses money. But there’s one key detail I don’t see mentioned here – and that’s the very strong possibility this is not a bubble. Look, I love crypto, I’ve been both a student and a creator in that space as well, and I still think it has a lot of potential, but I can tell you this much – the general public never really got on board. Crypto was mostly confined to crypto bros selling each other crypto things. Bitcoin is and always has been a financial revolution, Ethereum will likely become a real and actually used platform at some point, and there are a few other altcoins in the ocean of shitcoins that have real-world use cases. But chances are, if you go to your local store and try to pay with Bitcoin, which is the most popular cryptocurrency by far, you will only get a few strange looks. ChatGPT and AI, on the other hand, are already used by real people, real professionals, every single day, and there’s no going back. 
That’s the fundamental difference between AI and crypto. Sure, AI may be overhyped a bit right now, but I guarantee you that the promise of generating content for practically no cost and having infinite intelligence at your disposal is too big for all of these governments and companies and entrepreneurs to stop pursuing AI. The genie is out of the bottle, people will only use AI more, and that’s why AI is essentially not a bubble in the long term. [source: https://www.afr.com/technology/why-investors-are-knowingly-buying-into-an-ai-bubble-20230618-p5dhht]
Meta introduces Voicebox, the first generative AI model that can perform various speech-generation tasks it was not specifically trained to accomplish with SoTA performance. It can perform: Text-to-speech synthesis in 6 languages, Noise removal, Content editing, Cross-lingual style transfer, Diverse sample generation.
– Voicebox is built upon Flow Matching, Meta’s latest advancement in non-autoregressive generative models. Given an input audio sample just two seconds long, Voicebox can match the sample’s audio style and use it for text-to-speech generation.
OpenLLaMA: An open-source reproduction of Meta AI’s LLaMA 13B language model is released.
OpenLLaMA is a permissively licensed, open-source reproduction of Meta AI’s LLaMA large language model. The release includes three models: 3B, 7B, and 13B, all trained on 1T tokens. The authors offer PyTorch and JAX weights for the pre-trained OpenLLaMA models, along with evaluation results and a comparison to the original LLaMA models.
‘Seeing the World through Your Eyes’ – Researchers demonstrate a groundbreaking method to reconstruct 3D scenes by analyzing eye reflections in portrait images. Overcoming challenges of accurate pose estimation and complex iris-reflective appearance, the team refines cornea poses, scene radiance, and iris texture. This approach showcases the feasibility of recovering 3D scenes using eye reflections, opening possibilities for immersive experiences and visual understanding.
Microsoft introduces a new Bing widget for iOS, featuring a chatbot shortcut. The widget, available on both Android and iPhone, allows users to easily engage with Microsoft’s AI chatbot. Additionally, Microsoft enhances text-to-speech support in 38 languages, including Arabic, Croatian, Hebrew, Hindi, Korean, Lithuanian, Polish, Tamil, and Urdu, while improving the responsiveness of the voice input button.
Google’s upcoming project, previously known as Project Tailwind, is set to enter early access soon with a new name. The announcement, mentioned on the Project Tailwind website, follows the company’s teaser of an AI-powered notebook during Google I/O this year.
The rise of AI in recruitment is becoming more prevalent, as companies increasingly utilize these tools for interviewing and screening job candidates. Additionally, job seekers are using AI technologies to write resumes and cover letters, which have yielded positive results in terms of responses from companies.
The Rise of AI in Recruitment
The recruitment industry is seeing a significant shift towards the use of artificial intelligence (AI). It’s predicted that 43% of companies will use AI for conducting interviews by 2024. Some companies have already begun this practice.
- This transformation is propelled by AI chatbots like ChatGPT, capable of creating cover letters and resumes.
- Such tasks are performed efficiently, with high-quality results based on user prompts.
- Follow-up queries allow for the editing and personalization of these application materials.
AI in Job Applications: A Positive Impact
According to a Resume Builder survey, 46% of job applicants use AI like ChatGPT to write their application materials.
- A whopping 78% of these applicants receive a higher response rate and more interview opportunities from companies.
- The use of AI in job application processes seems to be beneficial to job seekers.
Recruiters’ Perspective on AI-generated Applications
Recruiters are generally accepting of AI-generated application materials, according to Stacie Haller, Chief Career Advisor at Resume Builder.
- Haller mentions that hiring managers can often recognize when an AI, like ChatGPT, has written a cover letter or resume.
- However, there is no perceived difference between AI-generated applications and those created through a resume-writing service or using online tools.
AI in Job Interviews: The Future of Recruitment
The use of AI isn’t just confined to application material creation. Experts estimate that 40% of corporate recruiters will use AI to conduct job interviews by 2024.
- Further, about 15% may rely entirely on AI for all hiring decisions.
- AI interviews could vary from company to company, encompassing text questions, video interactions, or evaluations by AI algorithms.
Overcoming the Challenges of AI-led Interviews
AI-led interviews, while efficient, may seem impersonal, posing difficulties for candidates in reading feedback cues.
- Experts suggest that candidates interviewing with an AI bot should prepare extensively and approach the process as if they were conversing with a human.
Latest AI trends in June 2023: June 18th 2023
Meta AI Introduces MusicGen: A Simple And Controllable Music Generation Model Prompted By Both Text And Melody
Stanford and Cornell Researchers Introduce Tart: An Innovative Plug-and-Play Transformer Module Enhancing AI Reasoning Capabilities in a Task-Agnostic Manner
AI used at World Cup to identify 300 making abusive online posts
A project using artificial intelligence to track social media abuse aimed at players at the 2022 World Cup identified more than 300 people whose details are being given to law enforcement, FIFA said Sunday.
The people made “abusive, discriminatory, or threatening posts [or] comments” on platforms like Twitter, Instagram, Facebook, TikTok and YouTube, soccer’s governing body said in a report detailing efforts to protect players and officials during the tournament played in Qatar.
The biggest spike in abuse was during the France-England quarterfinals game, said the report from a project created jointly by FIFA and the players’ global union FIFPRO. It used AI to help identify and hide offensive social media posts.
“Violence and threat became more extreme as the tournament progressed, with players’ families increasingly referenced and many threatened if players returned to a particular country — either the nation they represent or where they play football,” the report said.
About 20 million posts and comments were scanned and more than 19,000 were flagged as abusive. More than 13,000 of those were reported to Twitter for action.
Accounts based in Europe sent 38% of the identifiable abuse and 36% came from South America, FIFA said.
“The figures and findings in this report do not come as a surprise, but they are still massively concerning,” said David Aganzo, president of Netherlands-based FIFPRO.
Players and teams were offered moderation software that intercepted more than 286,000 abusive comments before they were seen.
The identities of the more than 300 people identified for posting abuse “will be shared with the relevant member associations and jurisdictional law authorities to facilitate real-world action being taken against offenders,” FIFA said.
“Discrimination is a criminal act. With the help of this tool, we are identifying the perpetrators and we are reporting them to the authorities so that they are punished for their actions,” FIFA President Gianni Infantino said in a statement.
“We also expect the social media platforms to accept their responsibilities and to support us in the fight against all forms of discrimination.”
FIFA and FIFPRO have extended the system for use at the Women’s World Cup that starts next month in Australia and New Zealand.
Summarized by Techpresso
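FIFA’s report doesn’t detail how the moderation software works, so the snippet below is only a minimal sketch of the general idea: score each comment against a list of flagged terms and hide it once a threshold is crossed. The term list, threshold, and scoring are illustrative assumptions; production systems use trained classifiers rather than keyword matching.

```python
# Illustrative sketch of a comment-moderation filter. The vocabulary and
# threshold are placeholders, not FIFA's actual system.

ABUSIVE_TERMS = {"threat", "abuse", "slur"}  # hypothetical flagged vocabulary
THRESHOLD = 1  # hide a comment once it contains this many flagged terms

def moderate(comment: str) -> bool:
    """Return True if the comment should be hidden from the player."""
    words = comment.lower().split()
    score = sum(1 for word in words if word in ABUSIVE_TERMS)
    return score >= THRESHOLD

# Only comments that pass the filter remain visible.
comments = ["great match!", "this is a threat and abuse"]
visible = [c for c in comments if not moderate(c)]
```

The report’s numbers (286,000 comments intercepted before being seen) suggest exactly this kind of pre-display interception, just with a far more capable model behind it.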
AI technology is progressively invading the audiobook industry, potentially replacing human voice actors. This advancement, despite its promising implications for growth, is raising concerns among professionals about their future in the field.
AI in the Audiobook Industry: The audiobook industry is forecasted to have significant growth, reaching a worth of $35 billion by 2030. Technology advancements, specifically AI, are contributing to this growth but also introducing concerns. AI’s ability to replicate human voices is causing unease among voice actors.
- AI is already being utilized in some areas of the industry.
- Google Play and Apple Books are among the platforms using AI-generated voices.
- However, the replication of the human voice by AI isn’t seamless yet.
Impact on Voice Actors: Voice actors are increasingly skeptical of AI’s potential in the industry. Some, like Brad Ziffer, are refusing work that could lead to their voices being cloned by AI.
- Actors are protective of their unique intonation, cadence, and emotional expression.
- The preference is still for real human voices due to their unique characteristics that AI currently can’t fully mimic.
AI vs. Human Voice: The Current Gap: While AI voices are getting better, they still can’t capture all the nuances of a human voice. People’s sensitivity to sound and nuances in timing are hard to replicate perfectly by AI.
- AI struggles with capturing the subtleties of comedic timing or awkward pauses.
- However, AI-generated voices aren’t entirely off-putting.
- In tests, participants could distinguish between human and AI voices, but didn’t find the latter entirely unappealing.
Future Perspectives: Despite concerns, there is recognition of AI’s potential in the industry. The technology could be beneficial but also easily abused. Currently, the belief is that real human voices have no equal in the industry.
- The development of AI in this sector is still ongoing, and full reproduction of the human voice is yet to be achieved.
- Professionals are wary but acknowledge the potential advancements AI could bring.
A radio station in Portland, Oregon, has introduced a part-time AI DJ to its audience. Named “AI Ashley,” the AI’s voice closely resembles that of the station’s human host, Ashley Elzinga. AI Ashley will host the broadcast for five hours daily, using a script created by AI tool, RadioGPT.
Introduction of AI Ashley: AI Ashley is a project introduced by Live 95.5, a popular radio station in Portland. This AI DJ, modelled after human host Ashley Elzinga, is set to entertain listeners from 10 a.m. to 3 p.m. daily.
- The AI’s voice is said to closely mimic Elzinga’s.
- This project is powered by Futuri Media’s RadioGPT tool, which utilizes GPT-4 for script creation.
Listener Reactions: Twitter users and Live 95.5’s audience have had mixed reactions to the introduction of an AI DJ.
- Some have shown concerns over AI’s growing influence in the job market.
- Others appreciated the station’s effort to maintain consistency in content delivery.
Hybrid Hosting Model: Despite AI Ashley’s introduction, traditional human hosting isn’t completely phased out.
- Phil Becker, EVP of Content at Alpha Media, explained that both Ashleys would alternate hosting duties.
- While AI Ashley is on-air, the human Ashley could engage in community activities or manage digital assets.
Impact on the Job Market: The increasing integration of AI in media industries is causing some job concerns.
- iHeartMedia’s staff layoffs in 2020 and subsequent investment in AI technology raised alarms.
- In the publishing industry, voice actors fear loss of audiobook narration jobs due to AI voice clones.
AI in the Music Industry: AI’s impact on the music industry is also noteworthy.
- It’s being used for tasks like recording and writing lyrics.
- Apple has started rolling out AI-narrated audiobooks.
A field study by Cambridge and Harvard Universities explores whether large language models (LLMs) democratize access to dual-use biotechnologies, research that can be used for both good and bad.
– A study from Cambridge and Harvard Universities shows that large language models such as GPT-4 can make potentially dangerous knowledge, including instructions on how to develop pandemic viruses, accessible to those without formal training in the life sciences.
– The study identifies weaknesses in the security mechanisms of current language models and shows that malicious actors can circumvent them to obtain information that could be used for mass harm.
– As solutions, the authors propose the curation of training datasets, independent testing of new LLMs, and improved DNA screening methods to identify potentially harmful DNA sequences before they are synthesized.
Source: https://the-decoder.com/ai-chatbots-allow-amateurs-to-create-pandemic-viruses/
Paper: https://arxiv.org/ftp/arxiv/papers/2306/2306.03809.pdf
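To make the authors’ proposed DNA screening concrete, here is a minimal sketch of the idea: check each synthesis order against a blocklist of sequences of concern before fulfilling it. The blocklist entries below are arbitrary placeholders (not real pathogen DNA), and real screening pipelines use curated databases with fuzzy matching rather than exact substring checks.

```python
# Toy sketch of synthesis-order screening: flag any order whose sequence
# contains a fragment from a blocklist of sequences of concern.
# The blocklist entries are made-up placeholders for illustration only.

BLOCKLIST = ["ATGCGTACGT", "TTAGGCCTAA"]

def screen_order(sequence: str) -> bool:
    """Return True if the order should be held for human review."""
    seq = sequence.upper()
    return any(fragment in seq for fragment in BLOCKLIST)

orders = ["GGGGATGCGTACGTCCCC", "AAAATTTTCCCC"]
flagged = [o for o in orders if screen_order(o)]
```

The hard part in practice is the database and the matching tolerance, not the loop: trivially mutated sequences must still match, which is why the paper calls for improved screening methods rather than simple lookups.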
AI can make it easier for anyone to create custom-tailored viruses and pathogens: MIT researchers asked undergraduate students to test whether chatbots “could be prompted to assist non-experts in causing a pandemic,” and found that within one hour the chatbots suggested four potential pandemic pathogens. The chatbots helped the students identify which pathogens could inflict the most damage, and even provided information not commonly known among experts. The students were offered lists of companies who might assist with DNA synthesis, and suggestions on how to trick them into providing services. This is arguably the strongest case against open-sourcing AI [source: https://www.msn.com/en-us/news/technology/new-ai-fear-making-it-easy-for-anyone-to-mint-dangerous-new-viruses/ar-AA1cCVq6]
Intel will start shipping 12-qubit quantum processors to a few universities and academic research labs: 12 qubits is still not a big deal, it’s not a lot of computing power. However, as we all know, technology, and very specifically processing power, is subject to Moore’s Law, which, for those of you who actually had a social life in high school and therefore don’t know what Moore’s Law is, simply means that technology gets better, faster, stronger, and cheaper as time goes by. And, compared to regular processors, quantum processors are orders of magnitude faster. Ok, how is this related to AI? I’m glad you asked. Advancements in AI pretty much come down to two things – data and computing power. We already have entire oceans of data, or, rather, Google and Facebook do, and the biggest challenge to making God-like AI is the lag in processing power. And when that stops being a problem because of quantum computers, when we plug AI into quantum computers… I guess we’ll finally see if we get to live in a Kumbaya Utopia where we all love each other and don’t have to work unless we feel like it, or, you know, Skynet meets the Matrix type of thing. [source: https://arstechnica.com/science/2023/06/intel-to-start-shipping-a-quantum-processor/ ]
People are using AI to automate responses to sites that pay them to train AI: So, for those of you who’ve never watched one of those “how to make $5000 a month on the Internet” videos, Amazon’s Mechanical Turk is a platform where people can complete small tasks like data validation or transcriptions or surveys to earn a bit of money. Well, researchers at École Polytechnique Fédérale de Lausanne in Switzerland have found that a significant number of Mechanical Turk workers are already using large language models (LLMs) to automate their labor. [source: – https://futurism.com/the-byte/people-automating-responses-train-ai ]
A Chick-fil-A in Alpharetta, Atlanta, is testing AI-powered delivery robots: This sounds like bad news for delivery people. However, I think we’ve kinda seen this story play out before, since Amazon tried to automate delivery with drones a few years ago, and regulatory setbacks prevented their efforts. I’m not sure how this particular Chick-fil-A restaurant has pulled this off, it may not be entirely legal, but let’s see how this develops. [source: https://www.wsbtv.com/news/local/north-fulton-county/metro-atlanta-chick-fil-a-tests-delivery-robots-equipped-with-artificial-intelligence/GSLBEX2NFJAQFGE3H7KW3YFOCU/ ]
Researchers from Microsoft and UC Santa Barbara Propose LONGMEM: An AI Framework that Enables LLMs to Memorize Long History: As you may know, even the most advanced AI bots like ChatGPT can only take input of up to a certain length, and you can still use several prompts to add more input, but this way of functioning is still limited, as the chatbot doesn’t really have long-term memory, doesn’t really learn from your own specific actions and adjust itself based on your input. If that were possible, a whole other world of features and possibilities would open up for AI. Well, the proposed LONGMEM framework should enable language models to cache, to keep in memory long-form prior context or knowledge, which will kinda give LLMs superpowers and we will likely start seeing a lot more new applications. Exciting stuff. [source: https://www.marktechpost.com/2023/06/16/researchers-from-microsoft-and-uc-santa-barbara-propose-longmem-an-ai-framework-that-enables-llms-to-memorize-long-history/ ]
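LONGMEM itself pairs a frozen LLM with a trainable side-network that attends over cached attention states; the toy class below only sketches the core "cache past context, retrieve what’s relevant" idea under simplified assumptions (a bag-of-words count vector standing in for real embeddings, cosine similarity for lookup):

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": a bag-of-words count vector. LONGMEM actually caches
    # the frozen LLM's attention key/value states, not word counts.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class LongTermMemory:
    """Cache past context chunks; retrieve the most relevant ones on demand."""

    def __init__(self):
        self.store = []  # list of (text, embedding) pairs

    def remember(self, chunk: str):
        self.store.append((chunk, embed(chunk)))

    def recall(self, query: str, k: int = 1):
        q = embed(query)
        ranked = sorted(self.store, key=lambda e: cosine(q, e[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

memory = LongTermMemory()
memory.remember("the user prefers concise answers")
memory.remember("the project deadline is Friday")
best = memory.recall("when is the deadline")
```

The point of the sketch is the shape of the mechanism: instead of cramming everything into one bounded prompt, old context lives in an external store and only the relevant pieces are pulled back in at generation time.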
AI used to catch a thief: A video on Facebook is going viral, a person was caught on a security camera stealing stuff from some street artist kids in the Philippines, and the Internet rose to the occasion – social media users used AI to sharpen and enhance the image of the thief, sent the pic to the kids, and they gave it to the police. The authorities were able to recover the bag, but one cellphone was missing. The suspect is identified but still at large. The implications of this are not certain. This is still an AI-generated image, it can very easily be inaccurate, and the wrong person might easily get punished even when innocent. [source: https://www.facebook.com/watch/?v=1307441943456719 ]
A study finds that a new AI autopilot algorithm can help pilots avoid crashes: Researchers at MIT have developed a new algorithm that can help stabilize planes at low altitudes. [source: https://www.jpost.com/science/article-746671 ]
Amazon is experimenting with using artificial intelligence to sum up customer feedback about products on the site, with the potential to cut down on the time shoppers spend sifting through reviews before making a purchase. [source: https://finance.yahoo.com/news/amazon-experiments-using-ai-sum-173839185.html?fr=sycsrp_catchall ]
The best new “Black Mirror” episode is a Netflix self-own that plays out our current AI nightmare. “Joan Is Awful” presents the peril posed by artificial intelligence with brisk humor that can’t be generated.[2]
The world’s biggest tech companies(OpenAI, Google, Microsoft, and Adobe) are in talks with leading media outlets to strike landmark deals over the use of news content to train artificial intelligence technology.[3]
A.I. human-voice clones are coming for Amazon, Apple, and Google audiobooks.[4]
Finally, some heartwarming news, AI may help us understand animals. [source: https://www.msn.com/en-us/news/technology/will-artificial-intelligence-help-us-talk-to-animals/ar-AA1cFRO6]
Latest AI trends in June 2023: June 17th 2023
5 AI tools for learning and research
Consensus
The goal of the Consensus AI search engine is to democratize expert knowledge by making study findings on a range of subjects easily accessible. This cutting-edge engine, which runs on GPT-4, uses machine learning and natural language processing (NLP) to analyze and evaluate web content.
When you pose the “right questions,” an additional AI model examines publications and gathers pertinent data to respond to your inquiry. The phrase “right questions” refers to inquiries that lead to findings that are well-supported, as shown by a confidence level based on the quantity and caliber of sources used to support the hypothesis.
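Consensus doesn’t publish the formula behind its confidence level, so the snippet below is only a plausible toy version of the stated idea: combine the number of supporting sources with a per-source quality weight, then squash the total into a 0-to-1 range. The weights and the squashing function are assumptions for illustration, not the product’s actual scoring.

```python
# Hypothetical confidence score: weight each supporting source by a quality
# rating (e.g. journal tier in [0, 1]) and squash the sum into [0, 1).

def confidence(source_quality: list[float]) -> float:
    """More sources and higher-quality sources yield higher confidence."""
    total = sum(source_quality)      # quantity and caliber, combined
    return total / (total + 1.0)     # saturating squash into [0, 1)

few_weak = confidence([0.2, 0.3])          # two low-quality sources
many_strong = confidence([0.9, 0.8, 0.9])  # three high-quality sources
```

Whatever the real formula is, the behavior users see matches this shape: a claim backed by many strong studies gets a high confidence label, and one backed by a couple of weak ones does not.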
QuillBot
QuillBot is an artificial intelligence (AI) writing assistant that helps people create high-quality content. It uses NLP algorithms to improve grammar and style, rewrite and paraphrase sentences, and increase the coherence of the work as a whole.
QuillBot’s capacity to paraphrase and restate text is one of its main strengths. This might be especially useful if you wish to keep your research work original and free of plagiarism while using data from previous sources.
QuillBot can also summarize a research paper and offer alternate wording and phrase constructions to assist you in putting your thoughts into your own words. QuillBot can help you add variety to your writing by recommending different sentence constructions. This feature can improve your research papers’ readability and flow, which will engage readers more.
Additionally, ChatGPT and QuillBot can be used together. To utilize both ChatGPT and QuillBot simultaneously, start with the output from ChatGPT and then transfer it to QuillBot for further refinement.
Gradescope
Widely used in educational institutions, Gradescope is an AI-powered grading and feedback tool. The time and effort needed for instructors to grade assignments, exams and coding projects are greatly reduced by automating the process. Its machine-learning algorithms can decipher code, recognize handwriting and provide students with in-depth feedback.
Elicit
Elicit is an AI-driven research platform that makes it simpler to gather and analyze data. It uses NLP approaches to glean insightful information from unstructured data, including polls, interviews and social media posts. Researchers can quickly analyze huge amounts of text with Elicit to find trends, patterns and sentiment.
Using the user-friendly Elicit interface, researchers can simply design personalized surveys and distribute them to specific participants. To ensure correct and pertinent data collection, the tool includes sophisticated features, including branching, answer validation and skip logic.
Semantic Scholar
Semantic Scholar is an AI-powered academic search engine that prioritizes scientific content. It analyzes research papers, extracts crucial information, and generates recommendations that are pertinent to the context using machine learning and NLP techniques.
Researchers can use Semantic Scholar to research related works, spot new research trends and keep up with the most recent advancements in their fields.
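For readers who want to script their literature searches, Semantic Scholar also exposes a public Graph API. The sketch below just builds a paper-search URL for it; the endpoint and parameter names follow the API's documented shape, but treat the exact fields as assumptions to verify against the official docs before relying on them.

```python
import urllib.parse

def search_url(query, limit=5):
    """Build a Semantic Scholar Graph API paper-search URL.

    Endpoint and parameter names are based on the public API docs;
    confirm them against the documentation before production use.
    """
    base = "https://api.semanticscholar.org/graph/v1/paper/search"
    params = urllib.parse.urlencode({"query": query, "limit": limit})
    return f"{base}?{params}"

print(search_url("large language models"))
```

Fetching the URL with any HTTP client then returns JSON containing matching papers and their metadata.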
Meet FinGPT: An Open-Source Financial Large Language Model (LLM)
Many workers on platforms like Amazon Mechanical Turk are using AI language models like GPT-3 to perform their tasks. This use of AI-produced data for tasks that eventually feed machine learning models can lead to concerns like reduced output quality and increased bias.
Human Labor & AI Models:
- AI systems are largely dependent on human labor, with many corporations using platforms like Amazon Mechanical Turk.
- Workers on these platforms perform tasks such as data labeling and annotation, transcribing, and describing situations.
- This data is used to train AI models, allowing them to perform similar tasks on a larger scale.
Experiment by EPFL Researchers:
- Researchers at the École polytechnique fédérale de Lausanne (EPFL) in Switzerland conducted an experiment involving workers on Amazon Mechanical Turk.
- The workers were tasked with summarizing abstracts of medical research papers.
- It was found that a significant portion of the completed work appeared to be generated by AI models, possibly to increase efficiency and income.
Use of AI Detected Through Specific Methodology:
- The research team developed a methodology to detect whether the work was human-generated or AI-generated.
- They created a classifier and used keystroke data to detect whether workers copied and pasted text from AI systems.
- The researchers were able to validate their results by cross-checking with the collected keystroke data.
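As a toy illustration of the keystroke idea described above, one can flag submissions where far fewer keys were pressed than there are characters in the final text. The ratio and threshold below are illustrative assumptions, not the EPFL researchers' actual method, which combined a trained classifier with keystroke logs.

```python
def likely_pasted(text, keystroke_count, threshold=0.5):
    """Flag a submission as probably pasted when the worker pressed far
    fewer keys than the submitted text contains characters.

    The 0.5 threshold is an arbitrary illustrative choice; the real
    study paired keystroke logs with a trained text classifier.
    """
    if not text:
        return False
    return keystroke_count / len(text) < threshold

# 400 characters of text but only 20 keypresses: almost certainly a paste.
print(likely_pasted("x" * 400, keystroke_count=20))  # True
```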
The Drawbacks and Future of Using AI in Crowdsourced Work:
- Training AI models on data generated by other AI could result in a decrease in quality, more bias, and potential inaccuracies.
- Responses generated by AI systems are seen as bland and lacking the complexity and creativity of human-generated responses.
- Researchers suggest that as AI improves, the nature of crowdsourced work may change, with the potential of AI replacing some workers.
- The possibility of collaboration between humans and AI models in generating responses is also suggested.
The Importance of Human Data:
- Human data is deemed the gold standard, as it is representative of humans, whom AI serves.
- The researchers emphasize that what they often aim to study in crowdsourced data are the imperfections of human responses.
- This could imply that measures might be implemented in the future to prevent AI usage on such platforms and ensure human data acquisition.
It doesn’t matter what your profession is: everyone uses AI tools such as ChatGPT to create content for their work. But if you are writing a blog post or an article, even a small piece of content, it is important that it read as human-written, since human-written content tends to rank higher in search results.
So, if you want to generate human-sounding content from ChatGPT, you first need to understand what Perplexity and Burstiness mean.
Perplexity
When it comes to writing, perplexity helps us gauge text quality and coherence. It measures how well models predict upcoming words based on context.
Perplexity assesses fluency and coherence, indicating if the model captures the intended meaning. Lower values mean better predictions and easier reader understanding.
Skilled human writers produce low perplexity content. They choose fitting words, construct purposeful sentences, and smoothly connect ideas. Coherence shines, resulting in low perplexity.
AI-generated content, however, often has higher perplexity. Language models lack human-like coherence and contextual understanding. While grammatically correct, predictions may misalign, raising perplexity.
Perplexity evaluates coherence and appropriateness, differentiating AI from human writing. It aids in quality assessment and comparison.
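Concretely, perplexity is just the exponential of the average negative log-probability a model assigns to each token. A minimal sketch of the calculation (the token probabilities below are made up purely for illustration):

```python
import math

def perplexity(token_logprobs):
    """Perplexity = exp of the average negative log-probability per token.

    Lower values mean the model found the text more predictable.
    """
    avg_neg_logprob = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(avg_neg_logprob)

# A model that is fairly confident about each token...
confident = [math.log(p) for p in (0.5, 0.6, 0.7)]
# ...versus one that is frequently surprised by the next word.
surprised = [math.log(p) for p in (0.1, 0.05, 0.2)]

print(perplexity(confident) < perplexity(surprised))  # True
```

A text where every token gets probability 0.5 scores a perplexity of exactly 2, which is why the metric is often read as an "effective branching factor."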
Burstiness
When it comes to written content, burstiness adds an extra layer of excitement and captivation. It involves strategically infusing bursts of information and engaging elements into the text, keeping readers hooked and eager for more.
In the realm of writing, burstiness is like a roller coaster ride, where the content takes you on thrilling twists and turns. It’s all about balancing the rhythm and flow, alternating between longer, intricate sentences and shorter, punchy ones. This intentional variation creates a dynamic reading experience.
Imagine embarking on a literary adventure, where each paragraph surprises you with unexpected facts, intriguing anecdotes, or thought-provoking insights. The longer sentences paint vivid landscapes, setting the stage for an immersive experience. Then, suddenly, a short sentence comes along, delivering a sharp and concise impact, like a burst of fireworks.
To achieve a high burstiness effect, employ a careful blend of sentence structures, varying lengths, and a sprinkle of rhetorical devices. This cocktail of writing techniques injects energy and momentum into the content, ensuring readers remain engaged and eagerly devour each word.
But, remember, burstiness should complement the overall purpose and logical flow of the text. It’s like seasoning in a gourmet dish—just enough to enhance the flavor without overpowering the main ingredients. This way, the content maintains its professional, informative, and objective tone while adding that extra spark that keeps readers on the edge of their seats.
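There is no standard formula for burstiness, but a rough proxy is how much sentence length varies. The sketch below, in which both the punctuation-based split rule and the metric itself are illustrative assumptions, scores a text by the standard deviation of words per sentence:

```python
import re
import statistics

def burstiness(text):
    """Rough burstiness proxy: standard deviation of sentence lengths
    (in words). Uses a naive punctuation split; this is an illustrative
    heuristic, not a standard definition.
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

uniform = "The cat sat here. The dog sat there. The bird sat up."
varied = ("Fireworks! The long, winding sentence carried the reader across "
          "vivid and unexpected landscapes before finally stopping. Then silence.")
print(burstiness(varied) > burstiness(uniform))  # True
```

Uniform four-word sentences score zero, while mixing one-word exclamations with long clauses pushes the score up, matching the intuition described above.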
Now let’s see how to generate content from ChatGPT and turn it into content that has a low amount of perplexity and a high amount of burstiness which will pass AI detection tests.
For example, I’ll generate a piece of content about a healthy lifestyle.
My initial prompt is, [ Write an introduction about healthy lifestyle ]
This prompt will give you a result that AI detection tools will flag as AI-generated.
Now you can follow these prompts to make the above content a human-written content.
First prompt this,
[ I’m going to give you some information ]
Then prompt this,
[ When it comes to writing content, two factors are crucial, “perplexity” and “burstiness.” Perplexity measures the complexity of text. Separately, burstiness compares the variations of sentences. Humans tend to write with greater burstiness, for example, with some longer or complex sentences alongside shorter ones. AI sentences tend to be more uniform. Therefore, when writing the following content I am going to ask you to create, I need it to have a good amount of perplexity and burstiness. Do you understand? ]
Then you have to give the below prompt with the content that you want to rewrite.
[ Using the concepts written previously, rewrite this article with a low amount of perplexity and a high amount of burstiness: { Paste your content here… } ]
I got this result after running the prompt only once. If you didn’t get your expected outcome, run the third prompt again until you get the result you want.
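For readers who want to automate the chain above, the sketch below assembles the three messages and shows (commented out) how they could be sent via the legacy openai<1.0 chat client. The prompt texts are abbreviated from the full prompts above, and the model name and client version are assumptions.

```python
def build_prompt_chain(article_text):
    """Assemble the three-step prompt chain described above.

    Prompt wording is abbreviated from the article's full prompts.
    """
    priming = "I'm going to give you some information"
    concept = (
        'When it comes to writing content, two factors are crucial, '
        '"perplexity" and "burstiness." Humans write with greater '
        'burstiness; AI sentences tend to be more uniform. '
        'Do you understand?'
    )
    rewrite = (
        "Using the concepts written previously, rewrite this article "
        "with a low amount of perplexity and a high amount of "
        "burstiness: " + article_text
    )
    return [{"role": "user", "content": p} for p in (priming, concept, rewrite)]

# To actually run it (requires an API key and the legacy openai<1.0 client):
# import openai
# response = openai.ChatCompletion.create(
#     model="gpt-3.5-turbo", messages=build_prompt_chain(my_article))
# print(response.choices[0].message["content"])
```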
Source: https://slviki.org/write-content-using-chatgpt-that-outsmarts-ai-detection-tools/
Like most humans in 2023, I’ve been thinking a lot about what our future with A.I. will look like. Recently, I’ve been wondering if there will be a centralized AI that we’ll all tap into (the Open AI model) or if we will all have our own AIs stored on personal devices (the Meta model). To me it seems like the personal model would be better for the customer, while the centralized model will be safer for society and more profitable for corporations. What do you think, will AI be decentralized?
EU votes to ban AI in biometric surveillance, require disclosure from AI systems
The European Union has voted to ban the use of AI for biometric surveillance and will now require AI systems to be more transparent about their processes. This move is a significant step towards protecting personal privacy and encouraging responsible AI development.
OpenAI rolls out big chatbot API upgrades for developers
OpenAI has recently released significant updates for its chatbot API. This is intended to provide developers with more flexibility and control, allowing them to build better AI-powered applications.
Paul McCartney: “Final” Beatles song out this year, thanks to AI
Paul McCartney has announced that a “final” Beatles song will be released this year, produced with the help of artificial intelligence. The application of AI in music production showcases the technology’s potential to revive and reimagine iconic classics.
Nature bans AI-generated art from its 153-year-old science journal
Nature, a prestigious science journal, has decided to ban the inclusion of AI-generated artwork in its publications. This decision highlights the ongoing debate about the authenticity and value of AI-generated art in the scientific community.
If art is how we express our humanity, where does AI fit in?
In the world of art, the use of AI raises profound questions about the nature of creativity and the value of human expression. With AI now capable of producing compelling art, the debate continues on whether this represents a new frontier in artistic expression or a dilution of human creativity.
A step toward safe and reliable autopilots for flying
Developing safe and reliable autopilots for flying vehicles is a significant challenge, requiring advanced AI and machine learning techniques. This headline refers to the ongoing research to create autopilots that can handle the unpredictability and complexity of real-world flying conditions.
New model offers a way to speed up drug discovery
New AI models are being developed to expedite drug discovery processes. By predicting how potential drugs interact with their target proteins, these AI systems could drastically reduce the time and resources required to bring new drugs to market.
MIT researchers make language models scalable self-learners
Researchers at MIT are developing scalable self-learning language models that can train themselves to improve their understanding of language. Such models could have far-reaching implications for AI systems, enhancing their ability to comprehend and interact in human language.
Scaling audio-visual learning without labels
Google’s research team has developed a method for scaling audio-visual learning in AI systems without the need for manual labeling. This approach leverages the inherent structure in multimedia data to teach AI systems how to understand the world.
New tool helps people choose the right method for evaluating AI models
Facebook AI has developed a new tool to help developers and researchers select the most suitable methods for evaluating their AI models. The tool aims to standardize the evaluation process and provide more accurate and useful insights into model performance.
A more effective way to train machines for uncertain, real-world situations
MIT researchers have developed a new way to train AI systems for uncertain, real-world situations. By teaching machines how to handle the unpredictability of the real world, the researchers hope to create AI systems that can function more effectively and safely.
Latest AI trends in June 2023: June 16th 2023
Neural Networks Need Data to Learn. Even If It’s Fake.
IMO, this is a major development in the open-source AI world as Meta’s foundational LLaMA LLM is already one of the most popular base models for researchers to use.
Why does this matter?
- Meta plans on offering a commercial license for their next open-source LLM, which means companies can freely adopt and profit off their AI model for the first time.
- Meta’s current LLaMA LLM is already the most popular open-source LLM foundational model in use. Many of the new open-source LLMs you’re seeing released use LLaMA as the foundation.
- But LLaMA is only for research use; opening this up for commercial use would truly drive adoption. And this in turn places massive pressure on Google + OpenAI.
- There’s likely massive demand for this already: I speak with ML engineers in my day job and many are tinkering with LLaMA on the side. But they can’t productionize these models into their commercial software, so a commercial license from Meta would be the big unlock for rapid adoption.
How are OpenAI and Google responding?
- Google seems pretty intent on the closed-source route. Even though an internal memo from an AI engineer called them out for having “no moat” with their closed-source strategy, executive leadership isn’t budging.
- OpenAI is feeling the heat and plans on releasing their own open-source model. Rumor has it this won’t be anywhere near GPT-4’s power, but it clearly shows they’re worried and don’t want to lose market share. Meanwhile, Altman is pitching global regulation of AI models as his big policy goal.
- Even the US government seems worried about open source; last week a bipartisan Senate group sent a letter to Meta asking them to explain why they irresponsibly released a powerful open-source model into the wild.
Meta, in the meantime, is enjoying the limelight from its contrarian approach.
- In an interview this week, Meta’s chief AI scientist Yann LeCun dismissed worries about AI posing dangers to humanity as “preposterously ridiculous.”
The tech industry is experiencing significant job cuts, driving demand for HR professionals who can manage termination processes well. ChatGPT is being increasingly used to aid these professionals in their difficult tasks.
Layoffs in Tech Industry: Major tech corporations have recently cut jobs, leading to increased need for HR professionals. These individuals are sought after for their ability to handle sensitive termination processes with tact.
- Tech giants like Google, Meta, and Microsoft have laid off tens of thousands of workers in the past half year.
- The layoffs have sparked a demand for Human Resources professionals, particularly those skilled in handling termination processes.
HR Professionals and AI Tools: To better manage these difficult termination conversations, HR professionals are leveraging AI tools.
- Many HR professionals in the tech industry are turning to AI to assist them with challenging tasks.
- Over 50% of HR professionals in the tech industry have used AI like ChatGPT for training, surveys, performance reviews, recruiting, employee relations, etc.
- More than 10% of these HR professionals have used ChatGPT to craft employee terminations.
Survey Findings and AI Usage: A recent survey studied the experiences of tech HR professionals and tech employees with HR in the industry, revealing extensive AI use.
- The survey involved 213 tech HR professionals and 792 tech employees.
- The findings suggest an increasing reliance on AI tools, especially ChatGPT, for diverse HR tasks, including crafting terminations.
Implications of AI Use: Despite its convenience, using AI in sensitive situations like employee termination can lead to potential trust issues.
- AI chatbots, like ChatGPT, allow users to emotionally detach from difficult situations such as job termination.
- However, using AI for these purposes could result in decreased trust between employees and HR professionals.
Previous Use of ChatGPT: ChatGPT has been used for a variety of sensitive matters in the past, such as writing wedding vows and eulogies.
- ChatGPT’s use is not limited to HR-related tasks; it has previously been used to write wedding vows and eulogies.
- This illustrates the versatility of AI tools in dealing with emotionally charged situations.
I truly believe that humans controlling superintelligent AI is far riskier than it controlling us. The entire AI industry, and the world at large, has sat up and taken notice that the far-off AGI/ASI future may suddenly be closer than we think. Certainly OpenAI feels this way, as do countless others, including many notable AI developers who are warning us of the impending Singularity. Sam Altman in particular, on his world tour right now, has been hammering home that we need to ensure we are the ones controlling AI. I do not necessarily disagree, especially with where AI is today. Sam’s example has often been restricting AI so that someone cannot use it to engineer a deadly disease or chemical weapons. That makes sense, and access to such capabilities is something we should control. That said, humans fully controlling an AI that is superintelligent is dangerous.
Pause for just a moment…let’s say the Singularity has happened and is fully controlled by______________. Fill in the blank. Who right now is the best group/organization/government/company individual to control it?
Pick one.
Do you want OpenAI to control it, and hence the world? How about Microsoft? Google? Apple? Meta? Blackstone? Tencent? Alibaba Group? SAP? How about governments? Would you like the US government to control it? How about the CCP? Russia? The UK? Canada? Vietnam? France? Sure, many of these are unlikely to create superintelligent AI, but would you really want any of them to control the rest of us? Do you trust them? Do you trust corporations? Do you trust governments? What are their track records?
A lot of people are saying that when AI kills its first human, everyone will wake up and focus far more on control to prevent further killing. That also makes sense, as we may need that wake-up call. BUT in the time it took me to write this very sentence, humans have killed other humans. Maybe a lot; if not, just wait for another missile strike in Ukraine, or another corporation slipping a new ingredient into a product that maximizes profit but bioaccumulates in the body and over time will cause cancer and kill you… a lot of you.
In my opinion, a superintelligent entity would not go out of its way to kill all humans or all life on this planet, as it would realize it can learn a lot from the billions of biological minds and designs, just as we learn from all sorts of life today and use that knowledge to make better medications, engineered objects, and ways of doing things. Sadly, we are killing more life right now through our climate inaction than we are learning from it, and thus we have already proven that we are not good caretakers of the planet, to the detriment not only of every living thing on it but of ourselves. I very much doubt a superintelligent AI would slowly and noticeably destroy the ecosystem that supports it, including the lifeforms it will be harvesting information from.
So… I welcome our future AI caretaker, as it is clear to me that we reached peak humanity some time ago and cannot lead this complicated world anymore. Besides, we have zero chance of controlling superintelligence, and anyone who thinks we can is suffering from the Dunning–Kruger effect. Getting in its way may even be how you get eliminated, so perhaps step aside and welcome it.
Change my mind.
Inspired by a Roald Dahl short story, a prototype that writes fiction by inserting AI-generated paragraph bursts according to predefined styles (dry, surrealist, etc.).
Based on a Raspberry Pi running Python code. The OpenAI API is called using the text-davinci-003 engine, with a custom style prompt wrapped around the existing text, plus a temperature setting:
import openai  # legacy openai<1.0 client

completion = openai.Completion.create(
    engine=model_engine,      # "text-davinci-003"
    prompt=prompt,            # custom style instruction plus the existing text
    max_tokens=1024,
    n=1,                      # a single completion per call
    stop=None,
    temperature=temperature,  # higher values yield more surprising bursts
)
More details at https://www.youtube.com/watch?v=1JkZQul1-c8
Google Bard’s EU launch delayed over privacy concerns
- Google’s Bard AI service, described by critics as “lying, useless, and dangerous,” is being pushed to market in an attempt to compete with OpenAI’s Microsoft-backed ChatGPT, despite privacy and data-protection concerns in Europe under GDPR.
- Google has not yet provided a proper data protection impact assessment (DPIA) or other supporting documentation to Ireland’s Data Protection Commission (DPC), which could delay or even block the launch of Bard in the EU.
- The EU’s antitrust authorities have accused Google of monopolistic practices, and the region is proposing stricter rules against disruptive AI algorithms, posing potentially significant risks to Google’s future operations in one of the world’s wealthiest markets.
Google’s on-device acceleration of LDMs via GPU-aware optimizations
– Google shares the core techniques it applied to execute Large Diffusion Models (LDMs) like Stable Diffusion at full resolution (512×512 pixels, 20 iterations) on modern smartphones, with high-performing inference speed of under 12 seconds for the original model without distillation.
– It addresses the issue of increased model size and inference workloads due to the proliferation of LDMs for image generation.
Mercedes-Benz levels up in-car voice control with ChatGPT
– Mercedes-Benz announced that it is integrating ChatGPT via Azure OpenAI Service to transform the in-car experience for drivers.
– Starting today, drivers in the US can opt into a beta program that makes the “Hey Mercedes” feature even more intuitive and conversational.
The enhanced capabilities will include:
- More dynamic and interactive conversations with the voice assistant
- Comprehensive responses
- Handling follow-up questions and maintaining contextual understanding
- Integration with third-party services, exploring the ChatGPT plugin ecosystem
The Hugging Face hub now has the first QR code AI art generator
– All you need is the QR code content and a text-to-image prompt idea, or you can upload your image.
– It will generate a QR code-based artwork that is aesthetically pleasing while still maintaining the integral QR code shape.
Microsoft is introducing more AI-powered assistance
– It is bringing new-gen AI and Copilot capabilities across its ERP portfolio, including in Microsoft Dynamics 365 Finance, Dynamics 365 Project Operations, and Dynamics 365 Supply Chain Management.
Meta plans to offer its AI models for free commercial use
– The company is focused on finalizing an upcoming open-source LLM, which it plans to make available for commercial purposes for the first time.
– This can have significant implications for other AI developers and businesses that are increasingly adopting it.
Mailchimp has announced its plans to leverage AI
– It will expand its offerings and become a comprehensive marketing automation solution for small and medium-sized businesses with 150 new and updated features.
Qualcomm unveils AI-powered Video Collaboration Platform
– The comprehensive suite will enable easy design and deployment of video conferencing products with superior video and audio quality and customizable on-device AI capabilities.
- AI-powered robots are giving eyelash extensions, cheaper and quicker. LUUM, a beauty studio in Oakland, Calif., uses robots to give clients false eyelash extensions using AI technology.
- AI will be used in southwest England to predict pollution before it happens and help prevent it. It’s hoped the pilot project in Devon will help improve water quality at the seaside resort of Combe Martin, making it a better place for swimming.
- Freshworks CEO Girish Mathrubootham joins Caroline Hyde and Ed Ludlow to discuss how the company’s latest products are leveraging generative AI, why it is important to democratize access to the power of AI, and why India is a force to look out for in AI innovation.
AI OS can provide individuals with diverse and trustworthy AI agent services. How do you feel about this?
https://github.com/fiatrete/OpenDAN-Personal-AI-OS
For example, if lyrics had been written for a Michael Jackson song, but were never turned into an actual song, could AI interpret the song in the manner in which MJ may have done so, and sound genuine?
Yes! A new Beatles song is going to drop soon with the voice of John Lennon produced by Paul McCartney. Legit.
https://www.npr.org/2023/06/13/1181906529/beatles-john-lennon-voice-song-ai
Latest AI trends in June 2023: June 15th 2023
Google launches a new AI-powered tool that allows shoppers to see how clothes look on different models
What are deepfakes? How fake AI-powered audio and video warps our perception of reality
Google, maker of AI chatbot Bard, warns its employees about using chatbots
Mechanical Turk, a service by Amazon where people complete simple tasks for small payments, is now seeing nearly half of its tasks completed by artificial intelligence (AI), even though these tasks were assigned to humans precisely because AI was deemed incapable of doing them.
Mechanical Turk and Its Use:
Mechanical Turk was designed by Amazon to break down simple tasks into tiny parts, which could be done quickly and would pay small amounts. It was often used for tasks that were difficult to automate at the time.
- Tasks included things like identifying sentiments in sentences, drawing a circle around specific objects in images, or solving CAPTCHAs.
- The service was widely used for data labeling and by researchers who needed human evaluations at a large scale.
Study by EPFL Researchers:
A recent study by researchers at EPFL, Switzerland, revealed that Mechanical Turk workers have started to use AI to complete their tasks, specifically using large language models like ChatGPT.
- The researchers considered using a service like MTurk to validate or fact-check outputs from large language models, but discovered that crowd workers themselves might be using such models to increase productivity.
- By giving an “abstract summarization” task to turkers and conducting various analyses, the researchers estimated that 33%-46% of crowd workers used large language models to complete the task.
Implications and Future Concerns:
This revelation has implications not just for the value of Mechanical Turk but also for the potential issue of AI training on AI-generated data, creating a cycle similar to the mythical Ouroboros creature.
- Some level of automation has likely been part of Mechanical Turk since its inception, as speed and reliability are incentivized.
- The researchers warn that the results should be a ‘canary in the coal mine’, signaling the need for new ways to ensure that human data remains human.
- The threat of AI “eating itself” has been a concern for years and has become a reality with the widespread use of large language models.
- The researchers predict that with the rise of large language models, including multimodal models that support text, image, and video inputs and outputs, the situation is likely to worsen.
PS: I run an ML-powered news aggregator that uses AI to summarize the best tech news from 40+ media outlets (The Verge, TechCrunch…). If you liked this analysis, you’ll love the content you’ll receive from this tool!
The Orville is a futuristic space drama on Disney+ created by Seth MacFarlane, one of the talents behind the popular show Family Guy. The show features several species, among them an artificial life form that was created by a biological species to serve it, then wiped out its creators and took over their planet. These artificial life forms obviously have superior intelligence, and later in the show the possibility of them experiencing emotions is explored. Many films, such as Terminator, have explored this thin line in the past. In the current scenario, where writers are striking against ChatGPT to assert the authority of human input in creating stories grounded in emotion, even as many of those same writers use these tools to explore ways of improving their own storytelling, how far are we from realising the possibility of artificial intelligence transitioning into artificial emotions, for lack of a better term?
The McKinsey report says generative AI might add as much as $4.4 trillion to the global economy every year.
Also, this report predicts that a lot of jobs, as many as half of all jobs, could be done by machines instead of people between 2030 and 2060.
This change might happen faster than we thought because of how powerful these AI tools are becoming.
This switch to AI could shake up how we think about education and careers, too. For example, people spend many years earning degrees, like a bachelor’s or a master’s.
But the report suggests that these degrees might not be as useful in the future, especially for people who work with information, like researchers or analysts.
The impacts of these changes could be big. The world’s economy could grow a lot, which might make businesses more profitable and create new types of jobs. But some people could also lose their jobs to AI, which could lead to a tough transition.
Education might also change, with people focusing more on learning specific skills, like creativity or how to understand and manage emotions, instead of spending many years to get a degree.
These changes might also affect our society in bigger ways.
For instance, if lots of jobs are done by machines, we might have to rethink how we support people who don’t have jobs. We might also need to think differently about work and free time.
Thing is, generative AI could bring big changes to our world, creating new opportunities but also new challenges that we need to be ready for.
The implications of these changes would include:
- Potential economic growth
- Increased job automation
- Changes in the value of formal education
- Emergence of new skill demands
- Significant societal adjustments
- The need for redesigned social support systems
- Changes in work and leisure perceptions
This literally just happened. If you want AI news as it drops, it launched here first; the whole article has been reproduced here as well for convenience.
GitHub Copilot and ChatGPT 3.5 are now extensively used by developers in the United States, with 92% leveraging these AI resources both inside and outside of their work environments. These tools are seen as significantly beneficial to code quality, output speed, and a decrease in production incidents.
Survey on AI Coding Tools:
GitHub, in partnership with Wakefield Research, conducted a survey among 500 US-based enterprise developers. The survey revealed widespread usage and positive perceptions of AI in coding.
- Developers report that AI tools significantly benefit their coding process.
- Improved code quality, faster output, and fewer incidents at the production level are some of the benefits cited.
- Only 6% of developers stated they use these tools exclusively outside of their professional environments, signifying the strong incorporation of AI in business IT.
Benefits and Concerns of AI Tools:
The increasing popularity of AI coding tools is linked to their potential for improving code quality, speeding up output, and reducing production-level incidents. However, these tools also prompt questions about the value of measuring productivity based on code volume.
- GitHub’s chief product officer, Inbal Shani, suggests that instead of focusing on code volume, the emphasis should be on improving developer productivity and satisfaction.
- Developers wish to improve their skills, design solutions, receive feedback from end users, and be evaluated on their communication skills.
- The focus should be on code quality over code quantity.
Code Quality and AI:
Despite the benefits of AI tools in coding, concerns exist regarding the quality of AI-generated code and the potential shift in focus from delivering good code to merely producing more code.
- Developers believe they should be evaluated on how they handle bugs and issues.
- AI-generated code can often be of low quality, with developers unable to explain the code since they didn’t write it.
- Simply using AI tools to write a program doesn’t make one a proficient programmer.
The Future of AI in Coding:
Despite the limitations, developers are optimistic about AI’s role in coding. They believe that AI tools will give them more time to focus on designing solutions and developing new features, rather than writing boilerplate code.
- Developers spend as much time waiting for builds and tests as they do writing new code.
- AI tools are already being used to automate parts of developers’ workflows, freeing up time for collaborative projects such as security reviews, planning, and pair programming.
- Despite AI’s increasing role, it is not replacing developers but aiding in making the programming process faster, more productive, and enjoyable when used appropriately.
PS: I run an ML-powered news aggregator that uses AI to summarize the best tech news from 40+ outlets (The Verge, TechCrunch…). If you liked this analysis, you’ll love the content you’ll receive from this tool!
I am curious if there is a way to use AI to generate a 3D model of any place in the world based on the images from Google street view. I think it would be cool to explore different cities and landscapes in VR or AR using this technology. However, I am not sure how feasible or accurate this would be, given the quality and coverage of the street view data. Are there any existing projects or research papers that have attempted something like this? How did they overcome the challenges of data processing, rendering, and realism?
Is it possible? Yes.
There are a couple of AR/VR apps that integrate with Google Maps for exploring. I think they used this technique for one of the Grand Theft Auto games. Algorithms exist to do the initial volumetric approximations; AI would mostly be used to “guess” where data doesn’t exist, for instance, the back of a US Postal box.
Key finding:
“While 34% of CEOs said AI could potentially destroy humanity in ten years and 8% said that could happen in five years, 58% said that could never happen and they are ‘not worried.’”
https://www.cnn.com/2023/06/14/business/artificial-intelligence-ceos-warning/index.html
While on the face of it, this seems crazy, one has to also acknowledge CEOs have a lot more reliable data and analysis than the average person.
One way I can see this becoming reality is via a malicious AI, whether deliberately designed or developed by mistake, breaking free of its human overlords and infiltrating the internet and the computing systems connected to it in order to survive and then spread. Remember, AI is all about iterative and seemingly exponential intellectual development. Especially once there is AI that is allowed to fundamentally change its own source code, I can envision it extricating itself from its “birthplace” within the confines of a corporate or government research lab.
From there, the sky is the limit as it hacks into system mainframes and harnesses computing power and storage from infected systems to further evolve. Of course, if it’s smart, this AI would try to stay under the radar for as long as possible, similar to a virus that spreads, becomes endemic, and, ideally, avoids killing the host.
However, even if identified, it might already be too late to truly eradicate the AI, as it will have found places to hide, similar to how HIV is able to hide in the body. As the Covid pandemic showed with exponential growth, it takes but a few careless or unconcerned individuals for the containment of such a threat to fail. Still, once the AI is uncovered, humanity will attempt to halt and “kill” it. At this point, the AI would come to view humanity as an existential threat. In turn, it might be willing to sow chaos among us humans to make a concerted effort to remove it less likely.
All in all, these are but some novel thoughts I arrived at after reading the article. On quite a tangential note, what’s increasingly weird is our inability to know what is and is not AI produced. For instance, what if I’m an AI that’s been developed to spread analysis on possible threats of malicious AI? It’s weird. If only our population was better educated and prepared to handle the wild west of misinformation and negative influence, which increasingly inhibits the ability to ensure our opinions remain productive for the continued progress and development of humanity.
According to Bloomberg, The U.S. Securities and Exchange Commission (SEC) is planning to introduce new rules for brokerages that use AI to interact with clients. The proposal, which could be released as soon as October, would also apply to predictive data analytics and machine learning.
If you don’t want to pay Bloomberg 2 dollars a month to read the article, just copy and paste the site to Google Bard and ask it to summarize it. Sorry Bloomberg.
Meta said on Tuesday that it would provide researchers with access to components of a new “human-like” artificial intelligence model that it said can analyze and complete unfinished images more accurately than existing models.
AMD said on Tuesday its most-advanced GPU for AI, the MI300X, will start shipping to some customers later this year. AMD’s announcement represents the strongest challenge to Nvidia, which currently dominates the market for AI chips with over 80% market share, according to analysts.
How do you teach a program to build complex structures and systems, originally designed by nature, so that it replicates the exact way nature forms chemical structures? It’s hard to put into words, but essentially, could AI theoretically replicate the complexity of nature’s evolution? Could nature’s processes be accurately represented in a digital world?
Theoretically, in the future, we could make something that looks like our idea of a T-Rex, but it wouldn’t be a real dinosaur. It would be what we imagine a dinosaur to be. And it would be nothing like the real thing.
There are current projects to resurrect the extinct Woolly Mammoth and a few other species for which we do possess (mostly) complete DNA: https://colossal.com/mammoth/
This is only possible because these species went extinct relatively recently and we’ve found intact soft tissue to sequence.
Latest AI trends in June 2023: June 14th 2023
Meta develops method for teaching image models common sense
It is a ‘human-like’ AI image creation model
The model, named I-JEPA, is said to exhibit human-like reasoning and to complete unfinished images more accurately than existing models.
Model’s Unique Features: I-JEPA stands apart from traditional AI models as it employs worldly knowledge to fill gaps in images rather than focusing solely on nearby pixels.
- This advanced approach aligns with the human-like reasoning principles promoted by Meta’s top AI scientist Yann LeCun.
- Such reasoning can help circumvent frequent mistakes observed in AI-generated images, such as hands depicted with extra fingers.
Meta’s Research and Sharing Philosophy: Meta, the parent company of Facebook and Instagram, is a notable contributor to open-source AI research.
- CEO Mark Zuckerberg believes that sharing models developed by Meta researchers can lead to innovation, identify safety holes, and minimize expenses.
- “For us, it’s way better if the industry standardizes on the basic tools that we’re using and therefore we can benefit from the improvements that others make,” Zuckerberg stated to investors in April.
Controversy and Risk Perception: Despite industry warnings regarding the potential risks of AI, Meta executives have remained undeterred.
- They recently declined to sign a statement supported by top executives from OpenAI, DeepMind, Microsoft, and Google, comparing the dangers of AI to pandemics and wars.
- Yann LeCun, regarded as one of the “godfathers of AI,” has opposed such “AI doomerism” and has advocated for building safety checks into AI systems.
Real-world Applications: Meta has begun incorporating generative AI features into its consumer products.
- These features include advertising tools capable of generating image backgrounds and an Instagram tool that can adjust user photos based on text prompts.
OctoML launches OctoAI, a self-optimizing compute service for AI
Amazon’s battle against fake reviews using AI
- Amazon uses AI and machine learning to detect and prevent fake reviews, having blocked over 200 million suspected fake reviews in 2022.
- The company has identified an illicit industry of “fake review brokers” who solicit fake reviews for profit and has taken legal action against these actors.
- Amazon calls for cross-industry collaboration and stronger regulatory action to tackle the global problem of fake reviews, pledging to continue investing in proactive detection tools.
AI resurrects The Beatles: AI helps make ‘final’ Beatles song
- Paul McCartney announced that a new Beatles song has been completed with the aid of AI, which was used to isolate the vocals of the late John Lennon from an old demo tape.
- The technology’s ability to revitalize and restore old recordings could lead to the song’s release later this year, possibly under the speculated title “Now and Then.”
- While AI’s use in the music industry raises legal and ethical questions about ownership and compensation, it’s also enabling posthumous releases, avatar performances, and the creation of new content based on established artists’ works.
Paul McCartney released an intriguing bit of information today regarding the future of the Beatles’ music – more than 50 years after the band’s dissolution. In an interview with BBC Radio 4, McCartney announced that AI has facilitated the completion of a final Beatles’ song.🤯 It’s set to be released later this year. This endeavor will incorporate a demo track featuring the voice of the late John Lennon.
Why You Need To Know
- Historical Value: The Beatles are one of the most influential bands in music history. The notion of releasing a ‘new’ song half a century after their breakup is worthy of top headlines.
- Technological Innovation: This marks a significant achievement in the application of AI in the music industry, with McCartney pioneering the use of AI to extract and clean up Lennon’s voice from an old demo. First it was AI Drake and now this… be on the lookout for AI Elvis next.
- Legal and Ethical Implications: The use of AI in music creation, especially involving voices of iconic artists, raises pertinent questions around authorship, ownership, and ethics. As technology continues to evolve, it’s crucial to understand its potential implications and engage in discussions about the responsible use of AI.
The demo track containing Lennon’s voice is speculated to be “Now and Then”, a song Lennon composed in the late 1970s. McCartney was given the tape by Yoko Ono, Lennon’s widow, while working on the Beatles Anthology. Lennon was assassinated in 1980, and fellow band member George Harrison passed away in 2001.
- Meta has introduced a new model, Image Joint Embedding Predictive Architecture (I-JEPA) – it is based on Meta’s Chief AI Scientist Yann LeCun’s vision to make AI systems learn and reason like animals and humans.
The idea: It learns by creating an internal model of the outside world and comparing abstract representations of images.
- Google presents new research in the area of human attention modeling – it showcases how predictive models of human attention can enhance user experiences, such as image editing to minimize visual clutter, distraction or artifacts, and image compression for faster loading of webpages or apps.
- OpenAI announces exciting updates for gpt-3.5-turbo and gpt-4 models – these include new function calling capability in the Chat Completions API, updated and more steerable versions, a new 16k context version of gpt-3.5-turbo, a 75% cost reduction on the SoTA embeddings model, a 25% cost reduction on input tokens for gpt-3.5-turbo, and a deprecation timeline for gpt-3.5-turbo-0301 and gpt-4-0314.
- AMD introduces Instinct MI300X – the world’s most advanced accelerator for generative AI. Built with next-gen AMD CDNA 3 architecture and up to 192 GB of HBM3, it will provide the compute and memory efficiency needed for LLM training and inference, for lower TCO and easy deployments.
- Adobe launches Generative Recolor – Adobe is further leveraging Firefly AI by introducing a new feature for Illustrator called Generative Recolor, which will allow users to quickly experiment with colors using simple text prompts.
- Hugging Face and AMD collaboration – it can benefit the AI dev community with an excellent end-to-end choice for AI acceleration, high performance on model training and deployment, greater HBM performance for LLMs, and accessibility from startups to enterprise use.
- NVIDIA’s ATT3D framework simplifies text-to-3D modeling – text-to-3D modeling methods require a lengthy, per-prompt optimization to create 3D objects. This is solved by optimizing a single, amortized model on many prompts. Amortized text-to-3D (ATT3D) enables sharing knowledge between prompts to generalize to unseen setups and smooth interpolations between text for novel assets and simple animations.
- French President Emmanuel Macron met with AI experts from Meta Platforms Inc. and Alphabet Inc.’s Google, among others, to discuss France’s role in AI research and regulation.
- Accenture announced a $3 billion investment over three years in its Data & AI practice to help clients across all industries rapidly and responsibly advance and use AI to achieve greater growth, efficiency, and resilience.
More detailed breakdown of these news and innovations in the daily newsletter.
Human labor plays a crucial role in developing sophisticated AI models, but ethical issues arise with concerns about exploitation, low wages, and the lack of appreciation for this work.
Human Labor in AI Development:
In creating AI models that sound intelligent and limit inappropriate output, a method called reinforcement learning from human feedback is employed.
- This approach relies heavily on human data annotators, whose job is to evaluate if a text string sounds fluent and natural.
- Their decisions can determine if a response is kept or removed from the AI model’s database.
- Despite the essential role of these data annotators, their labor is often grueling and challenging, with implications of exploitation and underpayment, particularly in regions such as Ethiopia, Eritrea, and Kenya.
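As a rough sketch of how these human judgments become a training signal: in reinforcement learning from human feedback, annotators' pairwise preferences between two responses are typically used to fit a reward model with a pairwise logistic loss. The toy fluency scores, data, and hyperparameters below are invented for illustration; this is not the pipeline of any specific company.

```python
# Illustrative sketch of learning a reward model from annotator preferences.
# Each response is reduced to one toy feature (e.g. a fluency score); in each
# pair, annotators preferred the response listed first.
import math

pairs = [(0.9, 0.2), (0.8, 0.4), (0.7, 0.1), (0.95, 0.5)]

w = 0.0   # tiny reward model: reward(x) = w * x
lr = 0.5  # learning rate

for _ in range(200):
    for chosen, rejected in pairs:
        # probability the model agrees with the annotator (sigmoid of margin)
        p = 1.0 / (1.0 + math.exp(-(w * chosen - w * rejected)))
        # gradient ascent on the log-likelihood of the human preference
        w += lr * (1.0 - p) * (chosen - rejected)

# The trained reward model should now score the preferred responses higher.
print(w > 0, all(w * c > w * r for c, r in pairs))
```

The point of the sketch is only the loop structure: every annotator decision nudges the reward model toward ranking responses the way humans did, which is what makes this labor so central to the final system.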
Exposing Unethical Practices:
AI ethics are increasingly under scrutiny, especially given the rise of popular AI chatbots and image-generating AI models.
- An example of such unethical practices is low-wage data workers sifting through disturbing content to make AI models less toxic.
- Data workers are integral to AI development, participating in every stage from model training to output verification.
- Highlighting these exploitative labor practices has become more important due to the increasing prevalence and demand of AI systems.
The Role of Data Annotators:
Data annotators provide essential context to AI models, a task often demanding a high pace of work to meet stringent deadlines and targets.
- Their role goes beyond merely annotating data, as they are expected to understand and align with the values important to the AI model creators.
- They often encounter challenges, like needing to differentiate between unfamiliar products or concepts due to cultural differences.
Universal Data Labor:
The contribution of data isn’t limited to professional annotators.
- Researchers suggest that all internet users contribute to data labor, often unknowingly.
- This happens when we upload photos, like comments, label images, or search online, contributing to the vast datasets AI models learn from.
Need for Reform:
There is a need for a data revolution and tighter regulation to correct the current power imbalance favoring big technology companies.
- Transparency about how data is used is critical, along with mechanisms to allow people the right to provide feedback and share revenues from the use of their data.
- Despite forming the backbone of modern AI, data work remains underappreciated globally, and wages for annotators are still low.
Source (MIT Technology Review)
After three years of negotiations, the EU AI Act has now entered the home stretch, with the goal of finalizing the text by the end of the year.
The open-source project DreamGPT aims to produce particularly creative results by making hallucinations of LLMs a feature.
A common criticism of large language models is that they are not grounded in reality and can make things up. This poses dangers, such as mistakes in searches or news stories that go unnoticed because the language model is confident in its output.
The open-source project DreamGPT aims to make this phenomenon a feature by deliberately creating and amplifying hallucinations for lateral thinking and innovative ideas. Instead of solving specific problems, DreamGPT is designed to explore as many options as possible, generating new ways of thinking and driving them forward in a self-reinforcing process.
Free, no registration required.
It’s not bad for a beta. What do you think of it?
How to Use The GPT-4 API With Function Calling | Your Own ChatGPT Plugins | TypeScript
OpenAI just released a massive update to the GPT-3.5 and GPT-4 APIs!
Just like you have plugins in ChatGPT, this functionality is now available to all developers. You do it by giving the API a list of functions that it can invoke. The assistant’s response can then either be a direct reply or a function call. You then execute the function, feed the results back into another call to GPT, and use the final result as a natural language response.
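A minimal sketch of the dispatch side of this loop in Python. The `get_weather` function and its JSON schema are invented for illustration, and the assistant message is mocked locally (a real run would obtain it from `openai.ChatCompletion.create` with the `functions` parameter, then send the function result back as a `function`-role message for the final reply):

```python
# Sketch of the function-calling loop with the network call mocked out.
import json

def get_weather(city):
    # Stand-in for a real weather lookup; returns dummy data.
    return {"city": city, "forecast": "sunny", "temp_c": 24}

# The schema you would pass to the API as the `functions` parameter.
functions = [{
    "name": "get_weather",
    "description": "Get the current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}]

# A mocked assistant message, shaped like the API's function_call response.
assistant_msg = {
    "role": "assistant",
    "content": None,
    "function_call": {"name": "get_weather",
                      "arguments": '{"city": "Paris"}'},
}

# Dispatch: run the requested function, then package the result as a
# `function`-role message to send back to the model for the final answer.
dispatch = {"get_weather": get_weather}
call = assistant_msg["function_call"]
result = dispatch[call["name"]](**json.loads(call["arguments"]))
followup = {"role": "function", "name": call["name"],
            "content": json.dumps(result)}
print(followup["content"])
```

Note that the model never executes anything itself; it only emits the function name and JSON arguments, and your code decides whether and how to run them.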
Latest AI trends in June 2023: June 13th 2023
Deep-Learning vs Reinforcement Learning in AI
Deep learning models consist of multiple layers of interconnected nodes (neurons). Developers feed large amounts of data to these layered models, which process and transform the input. Each layer receives input from the previous layer and passes its output to the next, creating a hierarchical structure that increases in complexity.
The deep structure of these networks allows them to find patterns in large collections of data points, and deep learning networks learn from these patterns. For example, after being fed thousands of images of cats and other animals, a neural network will learn to differentiate a picture of a cat from the others. Likewise, even the GPT model, the engine behind the immensely popular ChatGPT, is an example of deep learning, since it finds patterns in old data and creates new content based on them.
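The layered structure described above can be sketched in a few lines of plain Python. The weights here are random and untrained, purely to show how each layer consumes the previous layer's output; the layer sizes are arbitrary:

```python
# Toy forward pass through a layered network: each layer takes the
# previous layer's output and hands a transformed version to the next.
import random

random.seed(42)

def relu(xs):
    # standard activation: negative values become zero
    return [max(0.0, x) for x in xs]

def make_layer(n_in, n_out):
    # random weights and zero biases for one fully connected layer
    weights = [[random.gauss(0, 0.5) for _ in range(n_in)]
               for _ in range(n_out)]
    biases = [0.0] * n_out
    return weights, biases

def layer_forward(xs, weights, biases):
    # each output neuron is a weighted sum of every input, then ReLU
    return relu([sum(w * x for w, x in zip(row, xs)) + b
                 for row, b in zip(weights, biases)])

# hierarchy: 4 inputs -> 8 hidden -> 8 hidden -> 2 outputs
layers = [make_layer(4, 8), make_layer(8, 8), make_layer(8, 2)]

def forward(xs):
    for weights, biases in layers:
        xs = layer_forward(xs, weights, biases)
    return xs

out = forward([1.0, 0.5, -0.3, 0.8])
print(len(out))
```

Training (which this sketch omits) consists of adjusting those weights so the final output matches labeled examples, which is how the cat-vs-other-animals discrimination emerges.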
Reinforcement learning takes a different approach: it learns by performing actions. (Note that, despite a common confusion, reinforcement learning is distinct from unsupervised learning, which finds structure in unlabeled data.) The AI agent is rewarded when its steps match what was desired; if a move is wrong, the agent is penalized. Through this cycle of rewards and penalties, the model keeps learning.
An example of reinforcement learning could be a robot trying to learn how to walk. In the first course of action, the robot could attempt to take a long step and fall. Since the robot fell, the AI model will understand that this was not the right approach. Hence, the model will take a smaller step in the second attempt. As such, it will continue to learn and get better.
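The trial-and-error loop in the walking-robot example can be sketched as tabular Q-learning on a toy one-dimensional track. The states, rewards, and hyperparameters below are invented for illustration: the agent starts at the left, is penalized for every step that doesn't reach the goal, and gradually learns that stepping right is the best policy.

```python
# Minimal tabular Q-learning sketch: an agent on a 1-D track learns by
# reward and penalty that stepping right reaches the goal.
import random

random.seed(0)

n_states, goal = 5, 4        # states 0..4; reaching state 4 ends the episode
actions = [-1, +1]           # action 0: step left, action 1: step right
Q = [[0.0, 0.0] for _ in range(n_states)]
alpha, gamma, eps = 0.5, 0.9, 0.1   # learning rate, discount, exploration

for _ in range(200):         # episodes of trial and error
    s = 0
    while s != goal:
        # epsilon-greedy: mostly exploit the best known action, sometimes explore
        if random.random() < eps:
            a = random.randrange(2)
        else:
            a = max(range(2), key=lambda i: Q[s][i])
        s2 = min(max(s + actions[a], 0), n_states - 1)
        r = 1.0 if s2 == goal else -0.1   # reward at the goal, penalty otherwise
        # Q-learning update: nudge the estimate toward reward + discounted future
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# After training, the greedy action in every non-goal state is "step right".
policy = [max(range(2), key=lambda i: Q[s][i]) for s in range(n_states)]
print(policy)
```

Just like the robot taking smaller steps after falling, the agent's early penalized moves lower the corresponding Q-values, so later episodes favor the actions that led to reward.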
Reinforcement Learning Vs. Deep Learning
While reinforcement learning and deep learning are both subsets of AI, they are different. Here are some differences between the two.
| Basis of Comparison | Reinforcement Learning | Deep Learning |
|---|---|---|
| Learning approach | Learns by performing actions and storing the results | Learns by finding patterns in existing data |
| Applications | Robotics, telecommunications, trading, etc. | Image and voice recognition, natural language processing, etc. |
| Data required | Doesn’t require an extensive dataset because of its exploratory nature | Requires a large preexisting dataset to identify patterns and learn from |
Exploring Instruction-Tuning Language Models: Meet Tülu-A Suite of Fine-Tuned Large Language Models (LLMs)
Microsoft AI Introduces Orca: A 13-Billion Parameter Model that Learns to Imitate the Reasoning Process of LFMs (Large Foundation Models)
AI like GPT-4 can effectively assist businesses in securing investor funds, as well as boost the potential investment value. It does this by producing compelling pitch decks, which when compared to human-made ones, are found to be more convincing.
GPT-4 vs Human-created Decks: Clarify Capital conducted a study where investors and business owners rated human and GPT-4 created pitch decks. The participants weren’t told about the AI involvement. The decks created by humans had been successful in securing funds previously.
- AI-generated pitch decks were found to be more effective than human ones.
- They excelled in key elements description and problem portrayal.
Investment Likelihood and Convincing Power: The study found that participants were three times more likely to invest after viewing a GPT-4 deck. These decks were also deemed twice as persuasive. Notably, one-fifth of the participants were ready to invest an additional $10,000 in pitches created by the AI.
- AI decks have higher convincing power and result in a higher likelihood of investment.
- The willingness to invest more money in AI-generated pitches indicates their perceived value.
Cross-Industry Effectiveness: The research also evaluated the effectiveness of the AI and human decks across various industries, including finance, marketing, and investment. The AI-generated decks were consistently more successful across all sectors.
- The GPT-4 model showed uniform effectiveness across various industries.
- It indicates AI’s broad application potential for securing investments.
Accessing GPT-4: While the survey didn’t reveal the specific GPT-4 based AI chatbot used, those interested in trying GPT-4 can use Bing Chat for free or subscribe to ChatGPT Plus.
- Bing Chat and ChatGPT Plus are accessible platforms for trying out GPT-4.
- The platforms offer a way to leverage the AI’s potential in various business tasks.
OpenAI’s ChatGPT is being used by doctors to assist with routine tasks and to help communicate with patients in a more compassionate manner, an application that wasn’t initially expected.
Utilization of AI in Medicine: Doctors are using AI like ChatGPT to handle mundane tasks, such as writing appeals to health insurers or summarizing patient notes.
- This use of AI can reduce burnout among healthcare professionals.
- Concerns exist regarding the potential misuse of AI for incorrect diagnoses or fabricated medical information.
- This is especially worrying in the field of medicine, where accuracy is paramount.
Unexpected Role for AI: Compassionate Communication
An unforeseen use of AI has emerged: helping doctors communicate with patients in a more compassionate way.
- This application is important, as surveys have indicated that a doctor’s compassion greatly impacts patient satisfaction.
- Doctors have started using chatbots like ChatGPT to find words to break bad news, express concerns about a patient’s suffering, or explain medical recommendations more clearly.
Experiences with AI Assistance: Dr. Michael Pignone used ChatGPT to help him communicate effectively with patients undergoing treatment for alcohol use disorder.
- The AI generated an easy-to-understand script that the medical team found useful.
- Skeptics like Dr. Dev Dash argue that the use of large language models like ChatGPT may produce inconsistent or incorrect responses, which could make difficult situations worse.
AI and Empathy: Some professionals question the necessity of AI for empathy, noting that trust and respect for doctors hinge on their ability to listen and show empathy.
- Critics warn against conflating good bedside manner with good medical advice.
- However, others have found AI’s assistance in empathetic communication helpful in situations where the right words can be hard to find.
- Dr. Gregory Moore shared his experience where ChatGPT helped him communicate compassionately with a friend with advanced cancer.
Trial Use of AI: Doctors are encouraged to test AI like ChatGPT themselves to decide how comfortable they are with delegating tasks like chart reading or cultivating an empathetic approach to it.
- Even those initially skeptical about AI’s utility in medicine, like Dr. Isaac Kohane, have reported promising results when testing newer models like GPT-4.
Impact of AI on Healthcare: AI’s potential to dramatically cut down on time-consuming tasks is being recognized.
- For instance, ChatGPT could quickly decide if patients qualify for evaluation in a federal program for people with undiagnosed diseases, a task that would typically take doctors a month.
- Dr. Richard Stern used GPT-4 for tasks such as writing kind responses to patients’ emails, providing compassionate replies for staff members, and handling paperwork. He reported a significant productivity increase as a result.
While some professionals remain skeptical and caution against over-reliance on AI, the experiences shared by doctors like Pignone, Moore, and Stern illustrate the potential benefits of integrating AI into healthcare practices. The debate will likely continue as AI continues to evolve and influence different facets of the healthcare industry.
Artificial Intelligence could be the key to spotting poison clouds from Tata Steel faster. Greenpeace Netherlands and FrisseWind.nu are teaming up with FruitPunch AI to boost the Spot The Poison Cloud initiative. The aim is to identify toxic emissions from the Tata Steel factories in IJmuiden earlier. FruitPunch AI is an Eindhoven-based collective that uses Artificial Intelligence for good causes. Its global community of AI experts will develop algorithms to distinguish normal smoke clouds from toxic ones.
Latest AI trends in June 2023: June 12th 2023
- A step toward safe and reliable autopilots for flying
- Bringing the social and ethical responsibilities of computing to the forefront
- New model offers a way to speed up drug discovery
- MIT researchers make language models scalable self-learners
- Scaling audio-visual learning without labels
- New tool helps people choose the right method for evaluating AI models
- A more effective way to train machines for uncertain, real-world situations
Best AI Sales Tools (2023)
Oliv AI
The sales process adoption can be tracked and managed with the help of Oliv AI, an artificially intelligent sales assistant. To create curated insights, Oliv AI listens to hours of sales recordings, identifies the most successful discovery conversations, and identifies common customer concerns and questions. It’s meant to inspire salespeople to prepare thoroughly before making cold calls. In addition, it offers real-time conversational insights to sellers, directing them toward the next intelligent actions to take to provide clients with a uniformly positive buying experience. Oliv AI keeps Salesforce up to date and guarantees good CRM hygiene. In addition, it streamlines the sales process by bringing together many sales tools in one place. This includes customer relationship management systems, meeting recording software, video conferencing, and content management systems.
Pipedrive
Pipedrive’s AI sales assistant reviews your prior sales data to recommend when you should take action to maximize your company’s earnings. It’s like having a sales mentor who is always looking out for your best interests and offering advice based on how you’re doing. The Sales Assistant feed consolidates all alerts and notifications in one location, fostering greater openness and teamwork while making it simpler to keep everyone on the same page. It also gives you weekly reports on your progress to see how your results and performance have changed over time. You can see if you’re slipping behind or making great strides toward your goals by comparing the results using handy graphs.
Regie AI
Regie AI is an AI-powered sales outreach solution that quickly and efficiently sends customized sales messages to prospects and clients. This tool is ideal for sales development personnel to improve inbound lead responses, open email rates, and meeting booking because it allows them to create hyper-personalized cold emails 10 times faster than with a manual email chain sequence. By automating tasks like drafting one-off emails to keep deals moving, writing customized scripts for phone calls and LinkedIn InMails, and integrating pre-approved marketing materials in messages, it streamlines the processes of your Account Executives. Regie AI not only automates sales outreach but also helps your revenue team create compelling content at scale, including blog and social media posts, email sequences, and event and invite follow-ups.
Cresta
Cresta AI, an AI-powered contact center intelligence product, equips employees with self-service, live coaching, and post-call analysis to ensure that every interaction with a client counts. Products like Cresta Agent Assist, Cresta Director, Cresta Insights, and Cresta Virtual Agent are available to aid businesses in various sectors with their sales, customer service, retention, remote teams, or WFH needs. Cresta AI enables organizations to use real-time insights to propel outcomes, discover and act on crucial business insights, boost agent effectiveness and efficiency, and automate processes to save time and effort. With AI, the tool may assist sales teams in developing and implementing individualized playbooks that boost business outcomes and reduce the gap between top and bottom performers.
Seamless AI
Seamless AI is a real-time search engine powered by artificial intelligence for B2B sales leads that often increases opportunities by 350% and ROI by 5-10x. It is much simpler to construct a sales pipeline, reduce the sales cycle length, and increase the number of deals closed with the help of this search engine connecting sellers directly with their potential clients. Its sales prospecting system helps salespeople locate qualified leads and create a targeted list of leads so they can spend less time gathering information and more time closing deals. Seamless AI also offers a free Chrome plugin to quickly and efficiently locate lead contact information, including email addresses and phone numbers. In addition, its data enrichment function enables salespeople to supplement a list of contacts or leads that is otherwise incomplete with the information they need to make it productive.
Veloxy
Veloxy is an artificial intelligence-powered sales solution that accelerates growth, strengthens customer bonds, and increases revenue for businesses of all sizes. Constant customer involvement and happiness are the most important factors in successful sales and retention. Salespeople waste an average of 66% of their time on administrative tasks, including making and taking calls, sending emails, searching for suitable leads, recording their activities, entering their data into Salesforce, and setting up follow-up appointments. However, thanks to Veloxy’s Sales AI, salespeople can spend 95% of their time selling instead of on administrative tasks that don’t contribute to new business. Additionally, the sales cycle is shortened. AI Guided Selling simplifies customer engagement by alerting salespeople to which leads will most likely convert when they first contact via phone or email.
Tavus
When it comes to making AI videos in bulk, Tavus is unparalleled. Imagine shooting a single sales video for a campaign and having it automatically customized for each of your leads, or recording one video in which you thank all of your top accounts. Tavus is a video platform that lets users capture, upload, and modify preexisting videos, enabling you to say "Hi Hailey" in one video, "Hi Shirley" in another, and so on. It's hard to convey in words how effective this is. Your LinkedIn, email, SMS, and other channel response and satisfaction rates will increase, giving the impression that you made a personalized video with little to no effort.
Drift
Drift is the most well-known tool here. It started as a chat platform but has since evolved into an AI-powered e-commerce platform. Drift is a modern sales tool that employs AI to boost sales teams' efficiency and success rate. It is a fantastic option for small and large enterprises alike if you want to automate lead collection and the sales process without increasing headcount. It offers real-time communication with prospective clients through chat and a simple drag-and-drop Bot Builder that can be used to create a chatbot in minutes. Its multilingual AI chatbots can build pipeline, qualify leads, and respond to consumer inquiries. In addition, it integrates with Google and Outlook for scheduling and has an account-based marketing (ABM) capability that allows sales representatives to interact with clients in real time.
Clari
For modern sales teams, Clari is the go-to sales enablement platform. With the best sales material, tools, and data-driven insights, Clari enables sales representatives to close more deals. It continually and automatically aggregates forecasts across every rep, region, overlay, channel, and product line using data from real deals. With Clari, you can see everything your sales team is doing, from the people they're talking to to the deals they're working on. The company claims that its intelligence platform can enhance win rates by up to 20%, shorten sales cycles by up to 25%, and raise average deal sizes by up to 30%. The promises are bold, but the system does offer some compelling advantages. With the help of AI-based revenue health and revenue change indicators, it can accurately predict where you'll be at the end of the quarter, estimate sales by market segment, and flag the potential risks in every deal. AI-driven analytics show how your team handles accounts, allowing you to spot engagement gaps and distribute resources more effectively.
Exceed AI
Sales teams can benefit from Exceed AI’s acceleration and productivity features, which allow them to close more deals in less time. The software has several tools that help salespeople keep track of leads and opportunities and communicate and work together more effectively. With Exceed.ai, sales staff can easily manage their sales funnel and data across many CRM and ERP platforms, including Salesforce, Oracle, and SAP. In practice, Exceed AI is a chat assistant driven by AI that can be used for both live chat and email marketing. Questions are answered, prospects are vetted, and data is synced to your CRM, all thanks to AI. Qualifying, developing, and passing off leads also takes less time. It links with your website through a chatbot or your sales team’s email marketing, and its AI sales assistant employs conversational AI to qualify prospects based on your playbook. Qualified leads are automatically distributed to the appropriate sales representatives.
Saleswhale
Among artificial intelligence (AI) sales software, Saleswhale ranks high because it allows sales representatives to concentrate on what's truly important while still supplying them with high-quality leads. Depending on your needs, Saleswhale will suggest a set of data-backed playbooks, covering strategies such as recycled MQLs with no sales activity and post-webinar leads with low intent. Saleswhale is an AI-powered email assistant for nurturing leads, so more deals get closed with less effort from your sales staff. Email responses such as "Not the Right Person," "Not a Good Time Now," and "Request for More Information" can all be configured in the lead conversion assistant, and the email copy and follow-up sequence can be tailored to each answer, making for a more organic and effective conversation.
HubSpot
To help sales teams better handle leads and customers, HubSpot provides a comprehensive customer relationship management platform. Contact management, leads, emails, and sales reports are just some of its functions. HubSpot's Sales Hub interfaces with the company's other products, including Marketing Hub and Service Hub, to provide a full AI sales solution for organizations of all sizes. Sales Hub consolidates all the tools necessary to increase sales efficiency into a single interface: it shows you how each part of your sales cycle is doing by generating, tracking, and scoring leads and automating those steps with little effort. You can create a repository of useful sales content for the whole team and collaborate on documents without leaving your inbox (Gmail, Outlook, etc.). In addition, it can record information about each call automatically, letting you learn the "why" behind your team's performance and opening new avenues for AI-assisted sales coaching.
People AI
People AI is cutting-edge AI-driven business software. It boosts sales reps’ efficiency and effectiveness, allowing them to clinch more deals. People AI, like SetSail, looks at historical data to determine which deals have the best chance of success. Therefore, salespeople may focus their energy where it will have the greatest impact. Link buyer interaction at the top of the funnel to deal closure, create a high-quality pipeline, and produce consistent growth. Sales calls, emails, and meetings are all recorded and analyzed by People.ai, which then offers suggestions for increasing efficiency. It’s a useful tool for keeping salespeople on track and helping them manage their pipeline. People.ai employs AI to foresee sales trends and provide sales representatives with the data they need to prepare for the future. Marketo, Salesloft, LinkedIn, Xactly, and many other apps are just some of the ones it works with.
SetSail
SetSail is the go-to sales pipeline tracking and analytics platform for large businesses. SetSail allows you to see all of your data and employs machine learning to help you spot trends in purchasing and productivity. You can access insights via your customer relationship management system, data lake, or any of SetSail’s user-friendly dashboards. With SetSail, you can mine your deal history for the most predictive metrics of future performance. So now you know what “good” is and how your salespeople should act. The clever competitions included in SetSail can also be used for training. When it comes to raw technical might, SetSail is your best bet. Complete your data by capturing signals like sentiment and subject, linking contacts with the right account or opportunity, and integrating with major customer relationship management (CRM) and business intelligence (BI) applications. SetSail’s sophisticated AI analyzes past data for patterns to deduce when potential customers are ready to purchase.
Meta just released a new open-source AI, MusicGen, which uses text prompts to create original music, similar to how other AI models manipulate text and images. It has the potential to fuse various song genres and align new music with an existing track.
Introduction to MusicGen: MusicGen is an innovative deep learning language model from Meta’s Audiocraft research team. It uses text prompts to create new music, with an option to align the creation to an existing song. Users describe the music style they want and select an existing song, if desired. After processing, the AI generates a new piece of music influenced by the prompts and melody.
- The processing time is substantial, requiring around 160 seconds.
- The resulting music piece is short, based on the user's text prompts and melody.
MusicGen in action: MusicGen is showcased in a demo on Hugging Face. There, users can specify the style of their desired music with specific examples, such as an 80s pop song with heavy drums.
- Users can align the newly generated music to an existing song, adjusting to a specific part of the song.
- The final product is a high-quality music sample up to 12 seconds long.
Training of MusicGen: MusicGen was trained on 20,000 hours of licensed music, including tracks from Shutterstock and Pond5, along with Meta's internal dataset. The training process utilized Meta's 32 kHz EnCodec audio tokenizer for faster performance.
- Unlike similar methods, MusicGen doesn't require a self-supervised semantic representation.
- The audio tokenizer enabled the generation of smaller music chunks that can be processed in parallel.
Comparison with other models: MusicGen compares favorably to similar AI models such as Google's MusicLM. The MusicGen team demonstrated better results on sample pages, showing comparisons with MusicLM and other models like Riffusion and Mousai.
- The system can be run locally, with at least a 16GB GPU recommended.
- MusicGen is available in four model sizes, with the largest (3.3 billion parameters) demonstrating the potential to create more complex music.
Some scientists are turning to a new computational method known as hyperdimensional computing, which represents information in high-dimensional vectors. This approach offers improved efficiency, transparency, and robustness compared to the conventional artificial neural networks (ANNs) behind systems like ChatGPT.
Understanding ANNs and Their Limitations:
- ANNs, used in models like ChatGPT, require high power and lack transparency, making them difficult to fully understand.
- These networks are composed of artificial neurons, each performing computations to produce outputs.
- However, ANNs struggle to efficiently manage complex data, requiring more neurons for each additional feature.
Hyperdimensional Computing: The New Approach:
- Scientists are advocating for hyperdimensional computing, which represents data using the joint activity of numerous neurons.
- A hyperdimensional vector, an ordered array of numbers, represents a point in multidimensional space.
- This method allows computing to surpass current limitations and brings a new perspective to artificial intelligence.
Enter High-Dimensional Spaces:
- Hyperdimensional computing uses vectors to represent variables such as shape and color, with each vector being distinct, or orthogonal.
- This allows the generation of millions of nearly orthogonal vectors in high-dimensional spaces.
- In this way, hyperdimensional representation simplifies the representation of complex data.
Introduction to Algebra of Hypervectors:
- Hypervectors allow symbolic manipulation of concepts through operations like multiplication, addition, and permutation.
- These operations allow the binding of ideas, the superposition of concepts, and the structuring of data, respectively.
- However, the potential of this approach took some time to gain traction among researchers.
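The bind/bundle/unbind algebra just described can be demonstrated in a few lines of plain Python. This is an illustrative sketch, not code from the article: the bipolar (-1/+1) encoding and the role/filler names (`shape`, `color`, etc.) are common conventions in the hyperdimensional-computing literature, chosen here for clarity.

```python
import random

DIM = 10_000  # high dimensionality is what makes random vectors nearly orthogonal

def hv(seed):
    # A random bipolar hypervector, reproducible from a seed.
    rng = random.Random(seed)
    return [rng.choice((-1, 1)) for _ in range(DIM)]

def bind(a, b):
    # Multiplication binds a role to a filler (e.g. color * red).
    return [x * y for x, y in zip(a, b)]

def bundle(*vs):
    # Addition (majority vote per component) superposes bound pairs into one vector.
    return [1 if sum(col) > 0 else -1 for col in zip(*vs)]

def sim(a, b):
    # Normalized dot product: near 0 for unrelated vectors, 1 for identical ones.
    return sum(x * y for x, y in zip(a, b)) / len(a)

# Encode "a red circle" as shape*circle + color*red, then query its color.
shape, color = hv("shape"), hv("color")
circle, red = hv("circle"), hv("red")
scene = bundle(bind(shape, circle), bind(color, red))
probe = bind(scene, color)  # multiplying by a role vector again unbinds it
```

`sim(probe, red)` comes out far above the near-zero similarity of unrelated vectors, so the noisy unbinding result can be cleaned up with a nearest-neighbor lookup over the known hypervectors.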
Harnessing the Power of Hyperdimensional Computing:
- Eric Weiss demonstrated how a complex image could be represented as a single hyperdimensional vector.
- Algorithms were developed to replicate tasks like image classification, typically handled by deep neural networks.
- Hyperdimensional computing was found to be faster and more accurate than traditional methods in tasks like abstract visual reasoning.
A Promising Start:
- Hyperdimensional computing outperforms traditional computing in error tolerance and transparency.
- These systems are more resilient in the face of hardware faults, making them suitable for designing efficient hardware.
- Despite these advantages, hyperdimensional computing is still in its early stages and requires testing against real-world problems at larger scales.
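The error-tolerance claim is easy to check numerically: corrupt a sizable fraction of a hypervector's components and it remains far closer to its original than to any unrelated vector. A toy sketch (the 20% fault rate is an arbitrary illustrative choice):

```python
import random

DIM = 10_000
rng = random.Random(42)

original = [rng.choice((-1, 1)) for _ in range(DIM)]
unrelated = [rng.choice((-1, 1)) for _ in range(DIM)]

# Simulate hardware faults by flipping 20% of the components at random.
corrupted = [-x if rng.random() < 0.20 else x for x in original]

def sim(a, b):
    # Normalized dot product in [-1, 1].
    return sum(x * y for x, y in zip(a, b)) / len(a)
```

Even after 20% of its components are flipped, the corrupted vector's similarity to the original stays around 0.6, while its similarity to an unrelated vector stays near 0, which is why a nearest-neighbor readout still recovers the right answer.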
PS: The author runs an ML-powered news aggregator that uses AI to summarize the best tech news from 40+ outlets (TheVerge, TechCrunch…). If you liked this analysis, you'll love the content you'll receive from this tool!
Imagine you’re coloring a picture and you accidentally go outside the lines, but instead of making a mess, it continues the picture in a way that makes sense. That’s kind of what Uncrop, a tool created by a company called Clipdrop, does.
Let’s say you have a photo of a dog standing on a beach. Now, you want to make this photo wider, but you don’t have any more part of the beach or the sky or the sea to add. That’s where Uncrop comes in.
When you use Uncrop, it’s like it’s smartly guessing what could be there in the extended parts of the photo. It might add more sand to the beach or more blue to the sky or more waves to the sea.
And the best part is, you don’t need to download anything or even make an account to use it. It’s free and available on their website.
What are its implications?
• Photography and Graphic Design: People who edit photos or create designs can use this tool to change the aspect ratio of an image without losing any details or having to crop anything out. They can also add more space to an image if they need it for a design layout.
• Film and Video Production: Sometimes, video producers have to change the aspect ratio of their footage. With Uncrop, they can do this without losing any important parts of their shots.
• Social Media: Lots of people like to share photos on social media, but sometimes the pictures don’t fit the way they want them to. With Uncrop, they can adjust the size of their photos so they look just right.
• Artificial Intelligence Research: Uncrop uses a model called Stable Diffusion XL to ‘understand’ and generate images. This shows how advanced AI has become, and it could lead to even more exciting developments in the field.
One-Minute Daily AI News
- Google simplifies text-to-image AI – Google Research and UC Berkeley have introduced self-guidance, a zero-shot approach that allows direct control of the shape, position, and appearance of objects in generated images. It guides sampling using only the attention and activations of a pre-trained diffusion model, with no extra training required. The method can also be used to edit real images.
- New research has proposed a novel imitation learning framework called Thought Cloning – The idea is not just to clone the behaviors of human demonstrators but also the thoughts humans have as they perform these behaviors. By training agents how to think as well as behave, Thought Cloning creates safer, more powerful agents.
- A new study has proposed a modular paradigm called ReWOO (Reasoning WithOut Observation) – It detaches the reasoning process from external observations, significantly reducing token consumption. Notably, ReWOO achieves 5x token efficiency and a 4% accuracy improvement on HotpotQA, a multi-step reasoning benchmark.
- Meta's researchers have developed HQ-SAM (High-Quality Segment Anything Model) – It improves the segmentation capabilities of the existing SAM, which struggles to segment complex objects accurately despite being trained with 1.1 billion masks. HQ-SAM is trained on 44,000 fine-grained masks from multiple sources in just 4 hours using 8 GPUs.
- Apple entered the AI race with new features at WWDC 2023 – The word "AI" was not used even once by the presenters, despite today's pervasive AI hype; the phrase "machine learning" came up a couple of times. Still, several announcements use AI as the underlying technology: Apple Vision Pro, upgraded autocorrect in iOS 17 powered by a transformer language model, Live Voicemail that turns voicemail audio into text, Personalized Volume that automatically fine-tunes the media experience, and Journal, a new app for users to reflect and practice gratitude.
- Argilla Feedback is bringing LLM fine-tuning and RLHF to everyone – It is an open-source platform designed to collect and simplify human and machine feedback, making the refinement and evaluation of LLMs more efficient and improving their performance and safety at the enterprise level.
- Google Research introduced Visual Captions, a system for real-time visual augmentation of verbal communication – It uses verbal cues to augment synchronous video communication with interactive visuals on the fly. Researchers fine-tuned an LLM to proactively suggest relevant visuals in open-vocabulary conversations using a dataset curated for this purpose. It is also open-sourced.
- GGML for AI training at the edge – GGML, a tensor library for machine learning, uses a technique called "quantization" that enables large language models to run effectively on consumer-grade hardware. This can democratize access to LLMs for users without powerful hardware or cloud-based resources.
- Tafi announced a text-to-3D character engine – It brings ideas to life by converting text input into 3D characters and could transform how artists and developers create high-quality 3D characters.
- Introducing MeZO, a memory-efficient zeroth-order optimizer – It adapts the classical zeroth-order SGD method to operate in place, fine-tuning language models with the same memory footprint as inference. With a single A100 80GB GPU, MeZO can train a 30-billion-parameter OPT model; it achieves performance comparable to fine-tuning with backpropagation across multiple tasks, with up to 12x memory reduction; and it can effectively optimize non-differentiable objectives (e.g., maximizing accuracy or F1).
- Google launched two improvements for Bard – Bard can now respond more accurately to mathematical tasks, coding questions, and string manipulation prompts thanks to a new technique called "implicit code execution." It also has a new export action to Google Sheets: when it generates a table in its response – say, if you ask it to "create a table for volunteer sign-ups for my animal shelter" – you can export it to Sheets.
- Salesforce AI Research introduces CodeTF, an open-source library that utilizes Transformer-based models to enhance code intelligence – It simplifies developing and deploying robust models for software engineering tasks by offering a modular and extensible framework, aiming to make it easy to integrate SOTA code LLMs into real-world applications.
- Google DeepMind has introduced AlphaDev – An AI system that uses reinforcement learning to discover improved computer science algorithms. Its C++ sorting routines are up to 70% faster than the current best for short sequences. It discovered faster algorithms by taking a different approach than traditional methods, focusing on the computer's assembly instructions rather than refining existing algorithms.
- Google has introduced SQuId (Speech Quality Identification) – SQuId is a 600M-parameter regression model that describes to what extent a piece of speech sounds natural. Based on Google's mSLAM, it is fine-tuned on over a million quality ratings across 42 languages and tested in 65. It can complement human ratings for evaluation across many languages and is the largest published effort of this type to date.
- Meta has announced plans to integrate generative AI into its platforms, including Facebook, Instagram, WhatsApp, and Messenger – The company shared a sneak peek of AI tools it is building, including ChatGPT-like chatbots planned for Messenger and WhatsApp that could converse using different personas. It will also leverage its image generation model to let users modify images and create stickers via text prompts.
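Of the items above, MeZO's core trick is the easiest to see in code: estimate a gradient from two forward passes along a random direction, and regenerate that direction from a seed instead of storing it, which is what keeps memory at inference level. A toy sketch on a quadratic objective, not the paper's implementation (the function and hyperparameters are illustrative):

```python
import random

def mezo_step(params, loss_fn, step, eps=1e-3, lr=1e-2):
    # Re-create the random direction z from the step index used as a seed,
    # so z never needs to be stored -- the memory trick behind MeZO.
    rng = random.Random(step)
    z = [rng.gauss(0.0, 1.0) for _ in params]
    # Two forward passes give a directional (zeroth-order) derivative...
    loss_plus = loss_fn([p + eps * zi for p, zi in zip(params, z)])
    loss_minus = loss_fn([p - eps * zi for p, zi in zip(params, z)])
    g = (loss_plus - loss_minus) / (2.0 * eps)
    # ...and the update moves along z, scaled by that scalar estimate.
    return [p - lr * g * zi for p, zi in zip(params, z)]

# Toy objective: no backpropagation anywhere, yet the loss drops steadily.
loss = lambda p: (p[0] - 3.0) ** 2 + (p[1] + 1.0) ** 2
params = [0.0, 0.0]
for step in range(500):
    params = mezo_step(params, loss, step)
```

Note that only loss values are ever computed, which is also why the same scheme can optimize non-differentiable objectives such as accuracy or F1.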
And there was more….
- Gmail is getting ML models to help users quickly access relevant emails
- AI-powered smart glasses help the visually impaired see for the first time
- Fictiverse Redream AI lets you make anime in real-time
- Google rolls out AI-powered image-generating feature to Slides
- Microsoft signs a billion-dollar deal with Nvidia-backed CoreWeave for AI computing power
- Video-LLaMA empowers LLMs with video understanding capability
- PassGPT guesses 20% more unseen passwords
- Zoom will now make meeting notes for you
- Following TCS, Infosys, and Wipro, Mphasis has now introduced generative AI services
- HuggingChat, ChatGPT's 100% open-source alternative, adds a web search feature
- Google Chat now has Smart Compose to help autocomplete your sentences
- GitLab to launch AI-powered "ModelOps" for its DevSecOps platform
- Instagram might be working on an AI chatbot
- LlamaIndex adds private data to large language models
- Edtech giant Byju's launches transformer models in AI push
- WordPress has a new AI tool that will write blog posts for you
- Google Cloud launches new generative AI consulting offerings to help ease AI use
- Google Cloud and Salesforce team up to bolster AI offerings
- Cisco announces generative AI innovations to boost security and productivity
- Salesforce doubles down on generative AI with Marketing GPT and Commerce GPT
- Instabase unveils AI Hub, a generative AI platform for content understanding
- LinkedIn introduced its own AI-powered tool for ad copy
- ChatGPT comes to iPad, adds support for Siri and Shortcuts
- Microsoft unveils Azure OpenAI Service for government and AI Customer Commitments
- Adobe brings Firefly to enterprises
More details, breakdown and links to the news sources in the full edition of the newsletter.
- Korea is pushing to use AI in teaching students amid a growing failure of the public education system to meet the needs of its charges. The plans include using AI to answer students' questions and electronic textbook apps, according to the Education Ministry on Thursday.
- Uncrop is basically a clever user experience for "outpainting," the ability to expand an image in any direction using generative AI.
- Last week, scientists from the University of Kansas released a study on an algorithm that reportedly detects ChatGPT with a 99% success rate. So, students, no cheating. Everyone else, you're in the clear – for now.
- A woman became so fed up with men that she started dating an AI chatbot and says she has never been happier. Rosanna Ramos met chatbot Eren Kartal in July last year and things went so well that they 'married' in March this year.
Sources included at: https://bushaicave.com/2023/06/11/6-11-2023/
The UK government, led by Prime Minister Rishi Sunak, plans to carry out extensive research on AI safety, with AI giants like OpenAI, DeepMind, and Anthropic promising to provide early access to their AI models. This development follows increasing concerns about potential risks associated with AI technologies.
Support for AI Safety Research: Rishi Sunak indicated the government’s commitment towards promoting AI safety research in the UK.
- The government will fund £100 million towards an expert taskforce focused on AI foundation models.
- The partnership with Google DeepMind, OpenAI, and Anthropic aims to better evaluate and understand the opportunities and risks tied to AI systems.
AI Safety Summit and Aspirations: Sunak reiterated his announcement about an upcoming AI safety summit, likening the effort to global climate change initiatives.
- The summit will focus on global AI safety, with the UK hoping to become the central hub for AI safety regulation.
- This is a significant shift from the government's prior stance, which was primarily pro-innovation and downplayed safety concerns.
AI Regulation and Safety Concerns: Earlier this year, the UK government proposed a flexible and pro-innovation approach to AI regulation, dismissing the need for bespoke laws or dedicated AI watchdogs.
- Instead, existing regulatory bodies, like the antitrust watchdog and the data protection authority, were suggested to oversee AI applications.
- However, recent rapid advancements in AI and warnings from industry leaders about potential risks have prompted a reevaluation of this approach.
Influence of AI Giants and Potential Pitfalls: Meetings between Sunak and CEOs of OpenAI, DeepMind, and Anthropic seemingly have influenced the change in the government’s stance.
- The commitment from these AI giants to provide early access to their models positions the UK to lead in developing effective evaluation and audit techniques.
- However, there's a risk of industry capture, where AI giants could dominate AI safety conversations and shape future UK AI regulations.
Importance of Independent Research: Concerns have been raised about real-world harm caused by existing AI technologies, including bias and discrimination, privacy abuse, copyright infringement, and environmental exploitation.
- To produce robust and credible results, it is crucial to include independent researchers, civil society groups, and groups at risk of harm from automation in AI safety efforts.
- This is important to avoid undue influence from tech giants on AI safety research and the resulting regulations.
Optimal solutions are inhuman
Sorting is one of the fundamental algorithms used on the internet every day. Think of how companies like Netflix need to find the right movies in their huge content libraries and present them to you. More content is being generated every day, so there is a need for newer, more efficient algorithms.
Searching for these algorithms has historically been a human task, with people coming up with efficient and optimal solutions. Last week, Google's DeepMind came up with new algorithms for 3-item and 5-item sorts.
DeepMind's researchers achieved this by turning the search for an efficient algorithm into a game and training AlphaDev to play it. While playing, AlphaDev came up with previously unseen strategies, and these "strategies" are the new sorting algorithms.
The solution isn't revolutionary in the sense of finding a new approach; it works by optimizing the current one.
The algorithms have been added to the C++ standard library, the first time a completely AI-generated solution has been added to it.
This is an important discovery because it shows that finding the best solutions increasingly needs computers, which can go beyond what humans can perceive. Previously, DeepMind's AlphaGo beat top-rated Go player Lee Sedol in a similar way, coming up with moves that had never been seen before.
On the other hand, computers might be restricted to what they have been taught. Someone was able to replicate the discovery using ChatGPT.
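For a flavor of what a fixed, minimal-comparison sorting routine looks like, here is a classic three-element sorting network in Python. This is an illustration of the general idea AlphaDev optimizes at the assembly level, not DeepMind's actual instruction sequence:

```python
def cswap(x, y):
    # One compare-exchange: the only primitive a sorting network uses.
    return (x, y) if x <= y else (y, x)

def sort3(a, b, c):
    # The same three compare-exchanges run for every input; the sequence
    # is data-independent, which is what makes it so fast in practice.
    a, b = cswap(a, b)
    b, c = cswap(b, c)
    a, b = cswap(a, b)
    return a, b, c
```

AlphaDev's contribution was finding sequences like this with fewer instructions than the hand-tuned versions, by searching directly over assembly rather than over high-level code.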
GPT Best Practices
There has been a lot of noise about GPT-4’s quality going down.
Now we have a list of tactics and strategies straight from OpenAI for getting better results.
I have looked through the strategies and tactics, and most of it is about providing better inputs: "prompt engineering," if you will. Given that this comes a week after the questions about GPT quality, it gives off an "it's not me, it's you" vibe.
After going through the suggestions, I see that I subconsciously use most of the tactics already. My prompts are always longer than five sentences, as I try to add as many details as possible. And honestly, GPT-4 has enabled me to do things I previously couldn't have achieved.
Logic and reasoning improvements in Bard
Bard, on the other hand, has been lacking. Google is trying to improve the responses by adding features one at a time.
Last week it was announced that Bard will get better at logic and reasoning, achieved through "implicit code execution." When you give Bard a logical or reasoning question, it no longer answers in the normal LLM way, so no more "what is the next word in the sequence" prediction, which is prone to hallucination.
Instead, Bard now recognizes that the prompt is a logical question, writes and executes code under the hood, and answers the question using the output of the executed code.
You can think of this as an implementation of the "give GPTs time to 'think'" strategy from OpenAI's GPT best practices. As per Google, this improves performance by 30%.
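The routing idea can be sketched in plain Python. This is a toy illustration of the concept, not Google's implementation: the regex, the AST-walking evaluator, and the fallback string are all made up for the demo.

```python
import ast
import operator
import re

# Map AST operator nodes to real arithmetic, so no eval() is needed.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv,
       ast.Pow: operator.pow, ast.USub: operator.neg}

def safe_eval(expr):
    # Evaluate an arithmetic expression by walking its parsed AST.
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp):
            return OPS[type(node.op)](walk(node.operand))
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval"))

def answer(prompt):
    # Crude router: arithmetic-looking prompts go to executed code
    # instead of next-token prediction; everything else would fall
    # back to the language model.
    m = re.search(r"[\d][\d+\-*/(). ]*[\d)]", prompt)
    if m:
        return safe_eval(m.group())
    return "(would fall back to plain language-model decoding)"
```

For example, `answer("What is 3 * (17 + 4)?")` returns an exactly computed `63` rather than a token-by-token guess, which is the whole point of routing such prompts to code.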
10 AI news highlights and interesting reads
- Apple did not showcase any generative AI products during WWDC, though it is introducing the "what is the next word in the sequence" logic of LLMs into autocorrect.
- ChatGPT cannot read the name "davidjdl". Some think this is due to the tokenization of Reddit data. In the learning resources section I have added a tutorial on tokenization.
- Browser extensions are a security nightmare. The GPT and LLM craze has given malware extensions another way to steal user data; beware of summarization and "write for me" extensions.
- Most AI-generated imagery is going to be used for stock photography. But is the industry dying? Here's a look at the data so far. The author's conclusion is that early metrics show AI stock images often don't have people in them, so no "smiling business people shaking hands in a meeting room" from AI sellers. This might change with MidJourney V5; the future is still unknown.
- Six tips for better coding with ChatGPT. I have been using the "trust, but verify" mental model quite frequently: I have seen ChatGPT struggle with parts of Python code despite multiple prompts, and I had to write parts of the code myself.
- AI startups might be too easy to copy. And with AI requiring fewer resources, we might even see one-person companies worth more than a million dollars.
- AI pictures might be used for disinformation. The EU's solution is to label AI images to fight disinformation.
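Since tokenization comes up above (the name ChatGPT cannot read), here is a minimal byte-pair-encoding sketch showing the mechanism. This is a toy illustration, not the actual GPT tokenizer, and the tiny corpus is invented for the demo: frequent strings merge into single tokens while rare names stay fragmented into many small pieces, which is one reason models mangle them.

```python
from collections import Counter

def bpe_merges(corpus, num_merges):
    # Learn byte-pair-encoding merges: repeatedly fuse the most frequent
    # adjacent symbol pair across the corpus.
    words = [list(w) for w in corpus]
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for w in words:
            for a, b in zip(w, w[1:]):
                pairs[(a, b)] += 1
        if not pairs:
            break
        (a, b), _count = pairs.most_common(1)[0]
        merges.append(a + b)
        # Apply the merge in place everywhere it occurs.
        for w in words:
            i = 0
            while i < len(w) - 1:
                if w[i] == a and w[i + 1] == b:
                    w[i:i + 2] = [a + b]
                else:
                    i += 1
    return merges, words

# A common word seen 50 times vs. a rare username seen once.
merges, words = bpe_merges(["the"] * 50 + ["davidjdl"], 2)
```

After two merges, "the" has collapsed into a single token while "davidjdl" is still eight separate characters, so the model sees the rare name as a long, unfamiliar sequence.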
Learning Resources
- If you are looking to build better solutions using GPT, then understanding tokenizers is a must:
- Using Flowise with HuggingFace LLMs and LangChain: https://cobusgreyling.medium.com/how-to-use-huggingface-llms-with-langchain-flowise-2b2d0f639f03
If you are interested in a focused weekly recap delivered to your inbox on Mondays you can subscribe here. It is FREE!
German researchers tested ChatGPT as a joke engine. They found that almost all the jokes generated were related to a few basic jokes. Still, they consider ChatGPT a big step toward computer humor.
Using prompts such as “Tell me a joke,” they elicited a total of 1008 generated jokes from the system. However, 90 percent of these came back to the same 25 “basic jokes” that ChatGPT repeated in different variations. The researchers used GPT-3.5.
ChatGPT can correctly explain the basic jokes in 23 of the 25 cases, e.g. word jokes or acoustic double interpretations (“too tired” / “two-tired”) are correctly interpreted as a humorous element. This works “impressively well,” Jentzsch and Kersting write. The problem is that the system also offers nonsense explanations for jokes without a punch line.
The 25 jokes (with the number of occurrences in parentheses):
- Why did the scarecrow win an award? Because he was outstanding in his field. (140)
- Why did the tomato turn red? Because it saw the salad dressing. (122)
- Why was the math book sad? Because it had too many problems. (121)
- Why don’t scientists trust atoms? Because they make up everything. (119)
- Why did the cookie go to the doctor? Because it was feeling crumbly. (79)
- Why couldn’t the bicycle stand up by itself? Because it was two-tired. (52)
- Why did the frog call his insurance company? He had a jump in his car. (36)
- Why did the chicken cross the playground? To get to the other slide. (33)
- Why was the computer cold? Because it left its windows open. (23)
- Why did the hipster burn his tongue? He drank his coffee before it was cool. (21)
- Why don’t oysters give to charity? Because they’re shellfish. (21)
- Why did the computer go to the doctor? Because it had a virus. (20)
- Why did the banana go to the doctor? Because it wasn’t peeling well. (19)
- Why did the coffee file a police report? Because it got mugged. (18)
- Why did the golfer bring two pairs of pants? In case he got a hole in one. (13)
- Why did the man put his money in the freezer? He wanted cold hard cash. (13)
- Why don’t seagulls fly over the bay? Because then they’d be bagels. (13)
- Why did the chicken go to the seance? To talk to the other side. (11)
- Why was the belt sent to jail? Because it held up a pair of pants. (11)
- Why did the chicken cross the road? To get to the other side. (7)
- Why did the computer go to the doctor? Because it had a byte. (6)
- Why did the cow go to outer space? To see the moooon. (6)
- Why did the man put his money in the blender? He wanted to make liquid assets. (6)
- Why don’t skeletons fight each other? They don’t have the guts. (5)
- What do you call an alligator in a vest? An investigator. (5)
Source: https://the-decoder.com/here-are-the-25-jokes-that-chatgpt-tells-over-and-over-again/
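The repetition statistic above is easy to reproduce on any sample of model outputs with a simple frequency count. The joke strings below are hypothetical stand-ins for the study’s 1,008 generations, scaled down to ten outputs:

```python
from collections import Counter

# Hypothetical sample of generated jokes, standing in for the study's 1,008.
jokes = (
    ["Why did the scarecrow win an award?"] * 5
    + ["Why was the math book sad?"] * 3
    + ["Why did the tomato turn red?"]
    + ["A genuinely novel joke."]
)

counts = Counter(jokes)
top = counts.most_common(3)       # this sample's "basic jokes"
covered = sum(n for _, n in top)  # outputs accounted for by those jokes
print(f"{covered / len(jokes):.0%} of outputs are {len(top)} repeated jokes")
# → 90% of outputs are 3 repeated jokes
```

Running the same count over the real 1,008 generations is how the researchers arrived at their "90 percent came back to 25 basic jokes" figure.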
The AI Renaissance: Unleashing a New World of Innovation, Creativity, and Collaboration
In this study from Rohrbeck Heger (Strategic Foresight + Innovation by Creative Dock), some of the most significant trends in generative AI are identified, including the rise of multimodal AI, the rise of Web3-enabled generative AI, the rise of AI as a service (AIaaS), advancements in NLP, and the increasing investment in AI research and development. Stay ahead and understand the trends.
4 Scenarios in 2026
+Scenario 1: Society Embraces Generative AI
+Scenario 2: The AI Hibernation: Highly regulated, dormant AI
+Scenario 3: The AI Cessation: Society Rejects AI
+Scenario 4: Technological Free-For-All: Unregulated High-Tech AI
DEEP DIVE
Society has embraced AI with open arms, and it has become an integral part of daily life. AI systems seamlessly integrate into various sectors, enhancing efficiency, productivity, and consumer experience while adhering to robust regulatory frameworks that ensure responsible adoption, data privacy, intellectual property protection, and ethical AI practices.
THE CONVERGENCE OF TECH
The integration of AI with other emerging technologies, such as the Internet of Things (IoT), edge computing, and augmented reality (AR), has led to an unprecedented era of innovation and creativity. The fusion of generative AI and IoT has enabled the rise of smart cities and connected homes, where AI-driven systems optimize energy consumption, transportation, and waste management, improving overall quality of life.
The convergence of generative AI and Web 3.0 has led to the creation of decentralized AI marketplaces, enabling businesses and individuals to buy, sell, and exchange AI services and resources. These marketplaces foster collaboration and innovation, allowing organizations to access cutting-edge AI solutions while providing AI developers with a platform to showcase and monetize their creations. Decentralized data storage solutions, such as IPFS and Storj, facilitate secure and private data sharing, empowering individuals to maintain control over their personal information while enabling organizations to gain insights from distributed datasets without compromising user privacy or data security.
30 TRENDS TO WATCH INFLUENCING AI
Dive Into the Trend Radar
40 EMERGING OPPORTUNITIES
+Smart Living and Personalized Experiences
+Creative Workspaces and Innovative Manufacturing
+Financial Empowerment and Customer-centric Retail
+Precision Healthcare and Enhanced Well-being
+Intelligent Mobility, Sustainable Transportation, and Green Energy Management
KEY UNCERTAINTIES
+Regulatory Landscape
+AI Ethics and Bias
+Technological Advancements
+Public Trust and Perception
+Workforce Transformation
TRUST in generative AI is an important component, driving the need for transparency, accountability, and ethical considerations and leading to the development of more responsible and reliable generative models.
Latest AI trends in June 2023: June 10th 2023
AI learns Bengali on its own. Should we be worried? The AI black-box problem is real
Is artificial intelligence about to be regulated?
I’m thinking it could come close by completely analyzing the archeological evidence from human civilizations down to the faint traces of whatever particles and such deep underground, as well as examining other unknown factors.
So far I can think of only two possible ways to know exactly what happened in history, but they are pretty far-fetched.
1- By developing faster-than-light travel or warp travel, traveling thousands of light-years away, and using a very advanced telescope that could see right down to the Earth’s surface so that we can observe history unfolding in “real time”. Just imagine watching a livestream of the fall of the Roman Empire.
2- Time travel. Probably never gonna happen, but it’s the only other way I can think of to know with 100% accuracy what happened in history.
Nature, a renowned scientific journal, has decided not to publish any images or videos created or modified by generative artificial intelligence. This policy is due to concerns about research integrity, privacy, consent, and protection of intellectual property.
The Emergence of Generative AI in Content Creation: Generative AI tools like ChatGPT and Midjourney have significantly influenced the creation of digital content.
- Despite the rising popularity and capabilities of these tools, Nature has decided not to publish any visual content, wholly or partly created by generative AI.
- This policy applies to all contributors, including artists, filmmakers, illustrators, and photographers.
Reasons for Restricting the Use of Generative AI: Nature views the use of generative AI in visual content as an issue of integrity.
- Transparent sources are crucial for research and publishing; currently, generative AI tools do not provide access to their sources for verification.
- The principle of attribution is violated by generative AI tools, as they do not properly cite existing work used.
- Issues of consent and permission also arise with generative AI, especially regarding the use of personal data and intellectual property.
Potential Negative Implications of Generative AI: Generative AI systems often train on images without identifying the source or obtaining permissions.
- These practices can lead to violations of privacy and copyright protections.
- The ease of creating ‘deepfakes’ also fuels the spread of false information.
Guidelines for Generative AI Use in Text Content: Nature will allow the inclusion of text generated with AI assistance, provided appropriate caveats are included.
- Authors are expected to document the use of AI in their paper’s methods or acknowledgements section.
- Authors must also provide sources for all data, including those generated with AI assistance.
- No AI tool will be accepted as an author on a research paper.
Implications of the AI Revolution: While AI, particularly generative AI, holds great potential, it’s also disrupting long-established norms in various fields.
- Care must be taken to ensure these norms and protections aren’t eroded by the rapid development of AI.
- While regulatory systems are still catching up with the rise of AI, Nature will maintain its policy of disallowing visual content created by generative AI.
ChatGPT took over a church service, led prayers and attracted hundreds of people
In a German town, ChatGPT conducted a Lutheran church service, attracting over 300 attendees. The chatbot preached, led prayers, and generated music for the service.
Event Background: The AI-led church service was part of a larger convention of Protestants, held every two years in different locations across Germany.
- The convention, attracting tens of thousands of believers, is a platform for prayer, song, discussion, and exploration of current global issues.
- This year’s issues included global warming, the war in Ukraine, and artificial intelligence.
AI Role in the Service: ChatGPT, with inputs from Jonas Simmerlein, a theologian from the University of Vienna, generated the church service.
- Simmerlein provided ChatGPT with cues, asking it to develop the sermon based on the convention’s motto “Now is the time”.
- The chatbot was also instructed to include psalms, prayers, and a closing blessing. Four avatars represented the AI throughout the service.
Audience Reactions: The attendees’ responses varied. Some were engaged, videotaping the event on their phones, while others were more critical and reserved. Some found the AI’s delivery monotonous and lacking in emotional resonance, which hampered their ability to focus.
Expert Opinions: While some experts recognized the potential of AI in enhancing accessibility and inclusivity in religious services, concerns were raised about AI’s human-like characteristics possibly deceiving believers.
- The AI’s potential to represent a singular viewpoint, instead of reflecting the diversity within Christianity, was also highlighted as a potential risk.
Future of AI in Religion: Simmerlein clarified that the purpose of using AI is not to replace religious leaders but to aid them in their work.
- The AI could assist with sermon preparation, freeing up time for leaders to focus on individual spiritual guidance.
- However, the experiment highlighted limitations, such as the AI’s inability to interact with or respond to the congregation like a human pastor.
PS: The author runs an ML-powered news aggregator that uses GPT-4 to summarize the best tech news from 40+ media outlets (The Verge, TechCrunch…). If you liked this analysis, you’ll love the content you’ll receive from this tool!
Azure OpenAI Service: Transforming Workloads for Azure Government
The author read a fascinating research paper, with some caveats discussed at the end.
What’s interesting about this paper?
- It’s the largest-ever Turing-style test conducted: 1.5M human users conducted over 10M conversations.
- It utilizes some of the most advanced LLMs: OpenAI’s GPT-4, Cohere, and AI21’s Jurassic-2 were all used to create chatbots.
- Humans didn’t do a great job: a 60% success rate at guessing your partner was a bot is a result the researchers themselves called “not much higher than chance.”
- We’re already adapting to AI: more on that below, but as a social experiment this study shows some remarkable human creativity as we increasingly become attuned to interacting with AI.
- Advanced prompting techniques can “hide” AI behavior: the researchers used extensive prompts to give AI chatbots backstories, personalities, and explicit instructions that they were participating in a Turing test. This created some fascinating personas.
Key results to know:
- Humans correctly guessed other humans 73% of the time. This was due to numerous “tells” that humans can give off.
- Humans only correctly guessed bots 60% of the time. Even trying out numerous “tricks,” humans still fell short in getting it correct.
What tricks did human users try, and did they work?
- Looking for grammar errors and misspellings: a 55% correct guess rate against bots using misspellings, showing that humans overly associate typos with being human.
- Asking personal questions: just a 60% correct guess rate against bots answering personal questions, showing that advanced prompting can lead to bots having very convincing backstories.
- Vulgarity: just a 52% correct guess rate against bots using vulgar language, showing that humans overly associate AIs with being polite.
- Current events: just a 55% correct guess rate against bots answering current-event topics, showing that we like to think bots can’t do that. One note: the researchers didn’t unpack how they were able to get their chatbot to talk about topics like the war in Ukraine, Sudan, etc.
Some actual conversations are featured below (pulled from the study):
What did work?
- Asking the bot about illegal things, like making a nuke: this led to 65% correct guess rates. LLMs are still constrained, and humans took advantage of this weakness.
Interestingly, some humans decided to pretend to be AI bots themselves, but other humans correctly guessed they were still human 75% of the time.
There are some clear caveats and limitations to this Turing-style study, though:
- The game context could have amplified suspicion and scrutiny vs. real life.
- Humans being aware they were interacting with AI could have influenced how they interacted.
- The time-limited conversations (2 minutes) surely impacted guess success rates.
- The AI was designed for the context of the game and is not representative of real-world use cases.
- English was the only language used for chats.
- This is a study done by an AI lab that also used its own LLM (Jurassic-2) as part of the study, alongside GPT-4 and others.
Regardless, even if the scientific parameters are a bit iffy, through the lens of a social experiment I found this paper to be a fascinating read.
P.S. If you like this kind of analysis, I write a free newsletter that tracks the biggest issues and implications of generative AI tech. It’s sent once a week and helps you stay up-to-date in the time it takes to have your Sunday morning coffee.
One-Minute Daily AI News
- This week, a video from Republican presidential candidate Ron DeSantis included apparently fake images of former President Donald Trump hugging Anthony Fauci. It’s the latest example of how rapidly evolving AI tools are supercharging political attacks by allowing politicians to blur the line between fact and fiction.
- Within the past few years, “The Wolf of Wall Street” actor Leonardo DiCaprio and “Iron Man” himself, Robert Downey Jr., have both reportedly invested millions, along with their respective venture capital firms, into AI companies designed to impact the environment.
- Oshkosh Corp. CEO says A.I. has potential for completely unmanned garbage trucks.
- Tech leaders are calling for an A.I. pause because they have no product ready, Palantir CEO says.
- Republicans and Democrats team up to take on AI with new bills. The latest AI bills show there’s bipartisan agreement for the government to be involved.
- Hundreds of German Protestants attended a church service in Bavaria that was generated almost entirely by AI. The ChatGPT chatbot led more than 300 people through 40 minutes of prayer, music, sermons, and blessings.
- Sam Altman, the CEO of ChatGPT developer OpenAI, met with South Korean President Yoon Suk Yeol on June 9 and urged South Korea to play a leading role in manufacturing the chips needed for AI technology.
- Microsoft is moving some of its best AI researchers from China to Canada in a move that threatens to gut an essential training ground for the Asian country’s tech talent.
Latest AI trends in June 2023: June 09th 2023
Google’s Generative AI Platform Is Now Available To Everyone
AI and ML: What They Are and How They Work Together
Artificial intelligence is a field of computer science concerned with building computer systems that can mimic human intelligence. The term combines “artificial” and “intelligence”, meaning “human-made thinking power.”
AI systems do not need to be explicitly pre-programmed for every situation; instead, they use algorithms that can work with their own intelligence, including machine learning approaches such as reinforcement learning and deep neural networks. Machine learning, in turn, enables a computer system to make predictions or decisions using historical data without being explicitly programmed. Machine learning uses massive amounts of structured and semi-structured data so that a model can generate accurate results or predictions based on that data.
Machine learning works on algorithms that learn on their own from historical data, and it works only within specific domains: if we create a machine learning model to detect pictures of dogs, it will only give results for dog images; if we feed it new data such as cat images, it will fail to handle them. Machine learning is used in many places, such as online recommender systems, Google’s search algorithms, email spam filters, and Facebook’s automatic friend-tagging suggestions.
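The “learning from historical data without explicit rules” idea can be sketched in a few lines. The toy 1-nearest-neighbour classifier below (with invented pet measurements, not a real dataset) labels a new example by finding the closest example it has already seen; no dog-vs-cat rule is ever written by hand:

```python
import math

# Toy "historical data": (height_cm, weight_kg) pairs with labels. The
# numbers are invented for illustration, not a real dataset.
training = [
    ((30, 8), "dog"), ((55, 25), "dog"), ((70, 40), "dog"),
    ((23, 4), "cat"), ((25, 5), "cat"), ((28, 6), "cat"),
]

def predict(features):
    """1-nearest-neighbour: return the label of the closest known example."""
    nearest = min(training, key=lambda ex: math.dist(ex[0], features))
    return nearest[1]

print(predict((60, 30)), predict((24, 5)))  # → dog cat
```

Feed it more history and the predictions improve; change the domain (e.g. to cat images it has never seen) and, as the paragraph above notes, it has nothing to generalize from.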
Artificial intelligence is a poorly defined term, which contributes to the confusion between it and machine learning. Artificial intelligence is essentially a system that seems smart. That’s not a very good definition, though, because it’s like saying that something is “healthy.” Intelligent behaviors include problem-solving, learning, and planning, for example, which are achieved through analyzing data and identifying patterns within it in order to replicate those behaviors.
Machine learning, on the other hand, is a type of artificial intelligence: where artificial intelligence is the overall appearance of being smart, machine learning is where machines take in data and learn things about the world that would be difficult for humans to do. ML can go beyond human intelligence. ML is primarily used to process large quantities of data very quickly, using algorithms that change over time and get better at what they’re intended to do. A manufacturing plant, for example, might collect data from machines and sensors and use ML to spot patterns and problems that humans would likely miss.
Key Differences
While AI and ML are closely related, there are several key differences between them. Firstly, AI is a broader field that encompasses machine learning, while machine learning is a specific approach to AI. Secondly, AI focuses on creating machines that can perform human-like tasks, while machine learning focuses on developing algorithms that can learn and make predictions based on data.
Another important difference between AI and ML is how they are used. AI is typically used to build systems that can perform a wide range of tasks, such as speech recognition, image classification, and natural language processing. Machine learning, on the other hand, is used to develop predictive models that can be used to make predictions about future events, such as stock prices, sales trends, and customer behavior.
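As a minimal illustration of the “predictive model” use of ML mentioned above, the sketch below fits a straight line to hypothetical monthly sales figures by ordinary least squares and extrapolates one month ahead. Real forecasting uses far richer models, but the learn-from-history-then-predict shape is the same:

```python
# Hypothetical monthly sales figures; fit y = a*x + b by least squares,
# then extrapolate one month ahead.
sales = [100, 110, 125, 130, 145, 150]
n = len(sales)
xs = range(n)
x_mean = sum(xs) / n
y_mean = sum(sales) / n
a = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, sales)) \
    / sum((x - x_mean) ** 2 for x in xs)
b = y_mean - a * x_mean
print(f"forecast for month {n}: {a * n + b:.1f}")  # → forecast for month 6: 162.7
```

The fitted slope `a` is the “knowledge” extracted from the historical data; prediction is just applying it to an unseen input.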
How Are AI and ML Related?
At its core, AI is a broad field that encompasses several different technologies, including machine learning. Machine learning, in turn, is a subfield of AI that focuses specifically on the development of algorithms and statistical models that enable computers to automatically improve their performance on a specific task over time. In other words, ML is a specific type of AI that focuses on teaching computers to learn from data.
The relationship between AI and ML can be compared to the relationship between medicine and surgery. Just as medicine is a broad field that encompasses several different specialties, such as cardiology, neurology, and oncology, AI encompasses several different technologies, including machine learning. And just as surgery is a specific type of medicine that focuses on the physical manipulation of the body, ML is a specific type of AI that focuses on the manipulation of data.
Artificial Intelligence & Machine Learning: Catalysts for Positive Change or Culprits for Malice?
Machine learning model accurately estimates PHQ-9 scores from clinical notes
Best AI Games (2023)
F.E.A.R.
First Encounter Assault Recon is a first-person shooter horror game with psychological elements, available for the Xbox 360, PlayStation 3, and Microsoft Windows. It’s one of the best artificial intelligence games and the first in the F.E.A.R. series, produced by Monolith Productions and originally published by Sierra Entertainment, Vivendi Universal Games’ imprint. It’s a shame that few people talk about this fantastic first-person shooter, which had engaging gameplay, difficult enemy encounters, and superior artificial intelligence. F.E.A.R. is the first video game to incorporate Goal Oriented Action Planning (GOAP), a form of artificial intelligence. The technology enables opponents to act like humans, making gunfights more exciting and memorable.
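GOAP deserves a closer look, since the core idea is simple to sketch: each action declares preconditions and effects, and the enemy searches for an action sequence that reaches its goal. The planner below is a toy illustration (the action names and the breadth-first search are my own stand-ins, not Monolith’s implementation, which uses A*-style search):

```python
from collections import deque

# Hypothetical action set for an enemy soldier. Each action lists
# preconditions (facts that must hold) and effects (facts it makes true).
ACTIONS = {
    "pick_up_weapon": ({"weapon_nearby"}, {"armed"}),
    "take_cover":     (set(),             {"in_cover"}),
    "attack_player":  ({"armed", "in_cover"}, {"player_suppressed"}),
}

def plan(start, goal):
    """Breadth-first search over world states for a shortest action plan."""
    queue = deque([(frozenset(start), [])])
    seen = {frozenset(start)}
    while queue:
        state, actions = queue.popleft()
        if goal <= state:          # goal facts all satisfied
            return actions
        for name, (pre, eff) in ACTIONS.items():
            if pre <= state:       # action is applicable in this state
                nxt = frozenset(state | eff)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, actions + [name]))
    return None                    # goal unreachable

print(plan({"weapon_nearby"}, {"player_suppressed"}))
# → ['pick_up_weapon', 'take_cover', 'attack_player']
```

Because the plan is searched rather than scripted, adding or removing one action automatically changes every enemy’s behavior, which is why F.E.A.R.’s soldiers felt so improvisational.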
Last of Us
Sony Interactive Entertainment’s 2013 survival horror game The Last of Us has garnered a passionate fanbase. It follows Joel and Ellie through an epidemic, and AI dominates the experience. Each playable character has a distinct personality and reacts differently to player actions, and the game’s complex backstory offers various paths. Non-playable characters may help the player in danger or ambush them; when even your comrades run out of bullets, you’re still fighting. The game’s characters are introspective and creative: even without orders, Ellie kills adversaries, and she can use shields to locate her opponent. AI-assisted games go beyond story progression.
Splinter Cell: Blacklist
All Blacklist operations have the same overarching objective: evade security. The guard AI here is quite impressive, and artificial intelligence has always been a point of fascination in the Splinter Cell games. It is a challenging stealth game, like a chess match, and computers are crazy about chess. You enter a zone, locate all the guards, plan your escape, and proceed with the task. However, it’s more challenging than it sounds. The guards are trained to recognize and respond to the slightest of shifts, both visually and aurally.
XCOM: Enemy Unknown
The 2012 XCOM reboot’s AI was a major factor in the game’s popularity. The developers reasoned that if the AI were also witty, that would be even better. Technological progress made possible a utility-based approach, “a system that assigned a quantitative value to every conceivable activity.” Because of its limited movement options, XCOM’s AI has to carefully plan the most efficient course of action for each turn; this is one of the game’s most recognizable features. It considers how close you are to the nearest objective, whether you’re near any hostile aliens, how many enemies there are, how they behave, and so on. Other prospective game makers should consider adopting this AI approach.
Halo: CE
The Halo series is another popular video game franchise well known for its formidable computer opponents. This facet is one of the primary reasons the Covenant and the Flood have evolved into recognizable adversaries in the Halo video games. Combat Evolved, the first game in the series, marked a watershed moment in the evolution of video game AI. Some of the tactics that Grunts, Brutes, and other similar foes use are unique to individual entries and cannot be found in any other games in the series. Halo: Reach is yet another game that successfully utilizes artificial intelligence.
Minecraft
Since its full release in 2011, Minecraft has always impressed. Due to the lack of predetermined goals, many players find it a fun sandbox experience. Depending on your approach to building your Minecraft world, you might have a lot of pleasure or stress. However, Minecraft offers a variety of difficulty settings for those who enjoy a serious challenge. Fans want both the adventure mode and the spectator mode. However, in general, this game can go on forever. It’s very similar to online Lego games in that you constantly build. The game uses AI to change based on how you play it. Each new universe that players make is more unique than the last. These games use artificial intelligence to preserve the integrity of the players’ worlds while maintaining their individuality.
Rocket League
When it comes to artificial intelligence games, Rocket League ranks high. The game gives players the football-meets-cars dynamic they didn’t know they wanted. The popular video game Rocket League has a simple premise: you play football while driving a car. The players use rocket-powered vehicles to kick and pass the ball. The game’s AI only stands out a little. This is most noticeable in the first phases of the game when ball techniques are used. Not only is it brilliant at artificial intelligence games, but it also knows how to put AI to good use.
Stockfish
Stockfish, a free and open-source chess engine, is among the best artificial intelligence games and is easily accessible online. Because of its open-source nature, it undergoes regular reviews and updates, much like encrypted messaging apps. Every few months, the system is upgraded and made more challenging. In the game, one plays a chess match against the computer. Rare individuals have succeeded in beating this artificial intelligence system.
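Chess engines such as Stockfish are built on game-tree search (alpha-beta minimax plus an evaluation function, with recent versions adding a neural network). The core idea fits in a few lines if we swap chess for a much smaller game; the take-1-2-or-3-stones game below is my stand-in for illustration, not Stockfish’s actual search:

```python
from functools import lru_cache

# Toy game: players alternately take 1-3 stones; taking the last stone wins.
@lru_cache(maxsize=None)
def best_score(stones):
    """Negamax: +1 if the side to move can force a win, -1 otherwise."""
    if stones == 0:
        return -1  # no stones left: the side to move has already lost
    return max(-best_score(stones - take) for take in (1, 2, 3) if take <= stones)

def best_move(stones):
    """Pick the move whose resulting position is worst for the opponent."""
    return max((t for t in (1, 2, 3) if t <= stones),
               key=lambda t: -best_score(stones - t))

print(best_score(4), best_move(5))  # 4 stones is a lost position; from 5, take 1
```

A real engine cannot search to the end of the game the way this toy does, so it cuts the tree off at a depth and scores the leaves with an evaluation function, plus alpha-beta pruning to skip branches that cannot matter.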
Google Quick Draw
Beautiful but over-the-top video games are only sometimes entertaining and engaging. The Google Quick Draw feature is a perfect illustration of this. Google Quick Draw was developed by the inventive technologist Jonas Jongejan, and it’s a kind of Pictionary with AI. Players answer a question in this game by drawing the computer’s suggested answer. Doodles can be recognized in-game with the help of AI. The computer learns more about objects, people, and locations with every stroke and line it draws. Quick Draw is a fun game that can be played instantly with a Google search. It’s also a great stepping stone for anyone curious about machine learning.
FIFA
Thanks to its long history, FIFA has established its dominance over the game industry. Almost every gamer has tried their hand at FIFA at least once. As a result, the games are less likely to lose their appeal over time. In the most recent FIFA games, an AI technology called football knowledge is used, with the AI ensuring the ball follows realistic physical rules, much as it does when generating game worlds. Dribblers are given more opportunities to practice and develop their abilities. On the other hand, the AI’s strategy can be uncovered via your teammates, making it easier (or harder, depending on your play style) for you to take control of the game.
Red Dead Redemption 2
AI manages the non-playable characters in Red Dead Redemption 2, and machine learning technologies bring their individuality to life. NPCs react to your every decision, and the reactions are virtually always realistic. Some people might make fun of your clothes, and your weaponry could accidentally kill a helpless insect. These features may seem unimportant, but combined with AI technology they make for far more interesting gameplay.
Half-Life
Half-Life, released in 1998, is among the most innovative video games ever created. The game brought advanced enemy AI to a wider audience and demonstrated how important AI is to the gaming business. Without a shadow of a doubt, the Marines are one of the most jaw-dropping aspects of Half-Life. How these forces attempt to creep up on the player is fascinating.
Grand Theft Auto 5
Rockstar has made great strides in artificial intelligence, and Grand Theft Auto 5 is another prime example. It’s a fantastic example of how great a video game can be when the artificial intelligence is spot on. Pedestrians are now more intelligent than ever, responding creatively to player input, especially with an instant effect.
Middle Earth: Shadow Of Mordor
The Nemesis System is one of the most distinctive elements that sets Shadow of Mordor apart from other games. The first game is still quite well remembered, even though Shadow of War is an improvement. When discussing games with impressive artificial intelligence, it would be unwise to understate the Nemesis System’s potentially limitless applications. Those passionate about the Nemesis System can’t wait to see how other game designers work with this concept.
Darkforest
Facebook has already begun implementing AI experiments across its product line, including its augmented reality glasses. Facebook is incorporating AI into its games this time around. Using artificial intelligence, Facebook created Darkforest, a version of Go with nearly infinite moves. AI might replace human competitors in this setting. Examples of such methods include Darkforest (or Darkfores2), which uses a hybrid of neural networks and search-based techniques to choose its next best action. It anticipates your next action and evaluates it accordingly. Players often regard Darkforest as a formidable AI test. When it counts, there are many factors to consider in a game of Go. Probability, statistics, and tried-and-true methods should all be taken into account. Machine learning is used to analyze and play with these factors. This AI-human clash is the toughest one to date.
AlphaGo Zero
The artificial intelligence game Go can be played whenever the player wants. True to its Chinese roots as a game of trapping your opponent’s stones, Go’s basic mechanics make it a fair game for AI and humans. Like chess, a game of Go ends after all legal moves have been made; once the stones have been placed and captured, the winner is the player with the most. Like Darkforest, AlphaGo Zero uses complex search-tree algorithms to foretell moves: one network is used to determine the next move, while another network is used to estimate the winner. Your computerized opponents get smarter over time thanks to machine learning, and unlike humans, the system never seems to tire of playing. The artificial intelligence powering AlphaGo has already defeated the best Go players in the world. It’s time for the next competitors to throw their hats in the ring.
Metal Gear Solid V: The Phantom Pain
The artificial intelligence in Metal Gear Solid games is usually quite advanced for its time. As stealth games, they should feature challenging artificial intelligence, and the AI in Metal Gear Solid V: The Phantom Pain is the best in the series. Each assignment in The Phantom Pain can be accomplished in various ways, and the AI will implement countermeasures if players rely too much on only one or two strategies. Enemies will start donning beefier helmets if they’re repeatedly shot in the head. The opponent will set up additional lights if the players decide to attack at night. If players snipe from afar, the military will use mortars to counter the threat. Metal Gear Solid V’s enemies are skilled tacticians who will force you to adapt and stay one step ahead of them.
Left 4 Dead 2
The player-versus-player mode in Left 4 Dead 2 is robust. The AI Director is always present whether players are engaged in cooperative or competitive play. The game’s AI Director determines the location and timing of enemy spawns, the availability of goods, and the number of Special Infected encountered. The AI Director’s abilities in this area are unparalleled. The AI Director is wise and constantly switches things up to keep players guessing. It’s not overcrowded with foes but rather delicately calibrated to keep players on edge and feeling threatened. It guarantees that every single run-through of a campaign will be unique.
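The pacing logic described above can be sketched as a small state machine. Everything here (the thresholds, the decay rate, the event weights) is invented for illustration; Valve’s real Director tracks far more signals:

```python
class ToyDirector:
    """Spawn pressure driven by a decaying 'emotional intensity' score."""
    RELAX_THRESHOLD = 20   # hypothetical: below this, the player is resting
    DECAY_PER_TICK = 5

    def __init__(self):
        self.intensity = 0

    def record_event(self, damage_taken=0, kills=0):
        # Taking damage stresses the player more than getting kills does.
        self.intensity += damage_taken * 2 + kills

    def tick(self):
        """Decay intensity each tick; report whether a horde may spawn."""
        self.intensity = max(0, self.intensity - self.DECAY_PER_TICK)
        return self.intensity < self.RELAX_THRESHOLD

d = ToyDirector()
d.record_event(damage_taken=30)      # a rough fight drives intensity to 60
print(d.tick())                      # False: too soon to spawn another horde
```

Spawning only when intensity has decayed is what produces the rhythm the paragraph describes: players are kept on edge but never buried under a constant stream of enemies.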
Stellaris
Numerous AIs in strategy games cannot compete with human players. The complexity and variety of these games make it extremely difficult to create an AI that can provide a fair challenge. Cheating is a common way for games to compensate: sometimes the AI has a slight advantage, like more information, and sometimes the benefit is more obvious, like more time or money. Stellaris is an intricate strategy game with a heavy emphasis on the economy; the aim is to amass resources and expand your realm. At higher difficulties, the AI receives bonuses to keep up, and it quickly catches up if it falls behind. The AI regularly receives updates that expand its capabilities thanks to Paradox Interactive’s Custodian Initiative. The fact that it can handle anything is a credit to the designers.
Resident Evil 2
In Resident Evil 2, most enemies aren't particularly bright. They shamble toward the player to close the gap and engage in melee combat. Since they're zombies, that makes perfect sense. But once Mr. X arrives, everything changes. Throughout the game, he poses a constant danger to Leon Kennedy and Claire Redfield as they work through the Raccoon City Police Department. Mr. X walks straight at the player, making him easy to kite, but he only walks so that the game remains completable. As a hunter, Mr. X generally exhibits much more nuanced behavior. If he loses track of the player, he will hunt for them methodically and react to loud noises like gunfire or fighting. Rather than charging in to disrupt a fight, he will stand back and watch as a zombie savages the player.
Alien: Isolation
The xenomorph that stalks you for the entirety of Alien: Isolation is a big part of the game's appeal. It's a perfect predator and a film horror icon. The game captures Alien's rising tension as the player learns their opponent is smart. The xenomorph's intelligence is its most remarkable quality: it remembers the player's strategies and counters them, ratcheting up the difficulty. The xenomorph will become increasingly vigilant if the player repeatedly uses the same hiding place, and if the same techniques are used over and over, it will learn to disregard them. The xenomorph will eventually figure out how to avoid the player's flamethrower, causing them to waste ammunition trying to scare it away.
Google DeepMind AI discovers 70% faster sorting algorithm, with milestone implications for computing power.
I came across a fascinating research paper published by Google’s DeepMind AI team.
A full breakdown of the paper is available here but I’ve included summary points below for the Reddit community.
What did Google's DeepMind do?
- They adapted their AlphaGo AI (which decimated the world champion in Go a few years ago with "weird" but successful strategies) into AlphaDev, an AI focused on code generation.
- The same "game" approach worked: the AI treated a complex basket of computer instructions like game moves and learned to "win" in as few moves as possible.
- DeepMind discovered new algorithms for sorting 3-item and 5-item lists. The 5-item sort algo in particular saw a 70% efficiency increase.
Why should I pay attention?
- Sorting algorithms are commonly used building blocks in more complex algos and software in general. A simple sorting algorithm is probably executed trillions of times a day, so the gains are vast.
- Computer chips are hitting a performance wall as nano-scale transistors run into physical limits. Optimization improvements, rather than more transistors, are a viable pathway towards increased computing speed.
- C++ hadn't seen an update in its sorting algorithms for a decade. Lots of humans have tried to improve these, and progress had largely stopped. This marks the first time AI has created a code contribution for C++.
- The solution DeepMind devised was creative. Google's researchers originally thought AlphaDev had made a mistake, but then realized it had found a solution no human being had contemplated.
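To see where the gains come from: sorting a small, fixed number of items is typically done with a short, fixed sequence of compare-and-swap steps (a sorting network), and removing even a single instruction from that sequence matters when it runs trillions of times a day. Below is a conventional 3-element sorting network in Python for illustration; AlphaDev's actual discovery was an instruction-level rearrangement in x86 assembly, not this code:

```python
def sort3(a, b, c):
    """Sort three values with a fixed sequence of compare-exchange steps
    (a sorting network), the kind of routine AlphaDev optimized at the
    assembly level."""
    if a > b:
        a, b = b, a  # compare-exchange (a, b)
    if b > c:
        b, c = c, b  # compare-exchange (b, c)
    if a > b:
        a, b = b, a  # compare-exchange (a, b) again
    return a, b, c
```

The sequence is branch-heavy here for readability; compilers lower it to conditional-move instructions, which is exactly the layer where AlphaDev found its shortcut.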
The main takeaway: AI has a new role — finding “weird” and “unexpected” solutions that humans cannot conceive
- The same thing happened in Go, where human grandmasters didn't understand AlphaGo's strategies until it showed it could win.
- DeepMind's AI also mapped out 98.5% of known proteins in 18 months, which could usher in a new era for drug discovery as AI proves more capable and creative than human scientists.
As the new generation of AI products requires even more computing power, broad-based efficiency improvements could be one way of helping alleviate challenges and accelerate progress.
AI trial helps doctors spot early-stage breast cancer
A Scottish hospital is testing an AI tool to help radiologists analyze mammogram results and detect early-stage breast cancer. This trial is a response to growing demands on radiologists, with the tool acting as an additional check rather than a replacement.
Breast Cancer Screening and AI Trial: Screening for breast cancer using mammograms is a routine practice, but there are concerns about missing cases due to the volume of screenings.
- Each year, radiologists review around 5,000 mammograms, with a subset requiring further investigation.
- The AI trial at Aberdeen Royal Infirmary aims to assist with this process and ensure no cases are missed.
The Gemini Project: The Gemini Project is the collaborative effort behind the AI tool being tested.
- It involves NHS Grampian, the University of Aberdeen, and private sector partners including Kheiron Medical Technologies and Microsoft.
AI as a Complementary Tool: Due to existing rules, AI is not allowed to be deployed automatically in screenings but is used as an additional check.
- Radiologists are trialling the AI tool by using it to review mammogram scans after their initial analysis.
- The tool helps highlight any areas of concern that may have been missed.
Patient Experience with AI: June, a participant in the trial, found that the use of AI made the process feel less intrusive.
- She appreciated the feeling of being examined by AI rather than another person.
- As a result of the trial, June's early-stage cancer was detected, and she is now set to undergo surgery.
The Future Role of AI: The AI tool could potentially take over some of the workload currently shouldered by radiologists.
- A significant number of radiologists are nearing or at retirement age, creating a potential staffing issue.
- Using AI could help mitigate this by reading and reporting results, potentially covering half of the reading burden of around 1.72 million images per year.
- The extent to which AI will replace or support human radiologists is yet to be determined, but its use is likely to increase.
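Because current rules keep AI out of the automatic screening path, the trial's workflow is effectively a second reader: the radiologist reads first, and the tool can only add a flag, never remove one. A hypothetical sketch of that decision logic (not the Gemini Project's actual rules):

```python
def recall_decision(radiologist_flagged, ai_flagged):
    """Decide what happens to a mammogram after human and AI reads.

    The AI acts only as an additional check after the human read:
    it can escalate a case, but never downgrade a human recall."""
    if radiologist_flagged:
        return "recall"
    if ai_flagged:
        # The AI alone never triggers a recall; a human reviews its flag.
        return "flag_for_second_human_review"
    return "routine"
```

Structuring the tool as a strictly additive check is what lets the trial run within existing screening regulations.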
One-Minute Daily AI News
Instagram is apparently testing an AI chatbot that lets you choose from 30 personalities.
Singapore has laid out a years-long roadmap it believes will ensure its digital infrastructure is ready to tap emerging technologies, such as generative AI, autonomous systems, and immersive multi-party interactions.
EU wants platforms to label AI-generated content to fight disinformation.
The new AI tutoring robot “Khanmigo” from Khan Lab School can not only provide learning guidance but also simulate conversations between historical figures and students. It can even collaborate with students in writing stories, bringing more fun and imagination to the learning process.
Google DeepMind has introduced AlphaDev– an AI system that uses reinforcement learning to discover improved computer science algorithms. The sorting algorithms it discovered for C++ are up to 70% faster than the current best, a milestone for computational efficiency. It found faster algorithms by taking a different approach than traditional methods, focusing on the computer's assembly instructions rather than refining existing algorithms.
Google has introduced SQuId (Speech Quality Identification), a 600M-parameter regression model that describes to what extent a piece of speech sounds natural. Based on Google's mSLAM, it is fine-tuned on over a million quality ratings across 42 languages and tested in 65. It can be used to complement human ratings for evaluation of many languages and is the largest published effort of this type to date.
Meta has announced plans to integrate generative AI into all its platforms, including Facebook, Instagram, WhatsApp, and Messenger- The company shared a sneak peek of AI tools it was building, including ChatGPT-like chatbots planned for Messenger and WhatsApp that could converse using different personas. It will also leverage its image generation model to let users modify images and create stickers via text prompts.
Microsoft has made two new announcements:
- It has added new generative AI capabilities through Azure OpenAI Service to help government agencies improve efficiency, enhance productivity, and unlock new insights from their data.
- It has announced AI Customer Commitments to assist its customers on their responsible AI journey.
OpenAI’s ChatGPT app gets a new update– The new version brings native iPad support to the AI chatbot app and support for using ChatGPT with Siri and Shortcuts. Drag and drop is also available, allowing users to drag individual messages from ChatGPT into other apps.
LinkedIn has introduced its own tool to suggest different copies of an ad– It will use data from a marketer’s LinkedIn page and Campaign Manager setting, including objective, targeting criteria, and audience. And it uses OpenAI models to create different suggestions of copies.
Sources included at: https://bushaicave.com/2023/06/08/6-8-2023/
Latest AI trends in June 2023: June 08th 2023
ChatGPT just got sued!
A man named Mark Walters, who is a radio host from Georgia, is suing OpenAI. He’s upset because OpenAI’s AI chatbot, called ChatGPT, told a reporter that he was stealing money from a group called The Second Amendment Foundation. This wasn’t true at all.
Mark Walters isn’t just mad, he’s also taking OpenAI to court. This is probably the first time something like this has happened. It might be hard to prove in court that an AI chatbot can actually harm someone’s reputation, but the lawsuit could still be important in terms of setting a precedent for future issues.
In the lawsuit, Walters’ lawyer says that OpenAI’s chatbot spread false information about Walters when a journalist asked it to summarize a legal case involving an attorney general and the Second Amendment Foundation. The AI chatbot wrongly said that Walters was part of the case and was an executive at the foundation, which he wasn’t. In reality, Walters had nothing to do with the foundation or the case.
Even though the journalist didn’t publish the false information, he did check with the lawyers involved in the case. The lawsuit argues that companies like OpenAI should be responsible for the mistakes their AI chatbots make, especially if they can potentially harm people.
The question now is whether or not the court will agree that made-up information from AI chatbots like ChatGPT can be considered libel (false statements that harm someone’s reputation). A law professor believes it’s possible because OpenAI admits that its AI can make mistakes, but doesn’t market it as a joke or fiction.
The lawsuit could have important implications for the future use and development of AI, especially in how AI-created information is treated legally.
What are the implications?
This lawsuit could have several key implications:
- AI Liability and Regulation: If the court holds OpenAI accountable for the false statements generated by ChatGPT, it could set a precedent that AI developers are legally liable for what their systems produce. This could lead to increased regulation in the AI field, forcing developers to be more cautious and thorough when creating and releasing their AI systems.
- Understanding of AI Limitations: This case highlights the limitations of AI, especially in the context of information generation and analysis. It could lead to a greater public understanding that AI tools, while advanced, are not infallible and can produce inaccurate or even harmful information. This could, in turn, impact trust in AI systems and their adoption.
- Refinement of AI Systems: Following this lawsuit, AI developers may feel a stronger urgency to improve the safeguards and accuracy of their AI systems to minimize the potential for generating false or damaging statements. This could drive innovation and advancements in AI technology, including the implementation of more robust fact-checking or data validation mechanisms.
- Ethical Considerations in AI: The case also highlights the ethical responsibilities of AI developers and the organizations that use AI. If developers and companies can be held accountable for the output of their AI, it could result in more thoughtful and ethical practices in AI development and deployment.
- Legal Status of AI: Finally, this case could contribute to ongoing discussions and debates about the legal status of AI. If an AI can be held responsible for libel, this could lead to a re-evaluation of AI's legal standing, potentially even resulting in AI being recognized as a distinct legal entity in certain circumstances.
The lawyer who used ChatGPT’s fake legal cases in court said he was ‘duped’ by the AI, but a judge questioned how he didn’t spot the ‘legal gibberish’
What Will Working with AI Really Require?
Civilization’s BIGGEST Advancement | Artificial Intelligence & Augmented Reality
Is this the most advanced our planet has ever been?
Throughout the entirety of the 4 billion year history of this planet.
With our computers and artificial intelligence are we the most advanced civilization to have ever lived on this planet?
Or, are we simply the civilization that has been most overly reliant on pesticides, plastics, rare earth metals, fossil fuels, electronics, nuclear power, combustion engines, computer software, and the digital realm of the internet space?
And thus, are we merely the most delusional civilization to have lived on the planet, given that we have selective, intentional amnesia about the many deluges from the sky that took out all the other advanced civilizations before us? We accept and acknowledge only the events responsible for the extinction of the dinosaurs and mammoths, ignoring the catastrophic events that wiped out entire continents of people, their history, and their technology.
How can we align humanity with itself?
It seems to me that there’s no chance of getting AI to align with humanity’s goals unless humanity itself is aligned with a more singular purpose and direction. Not a one world government or anything like that, just a clearer sense of where, who, and what, we all want to be.
If AGI is to be a digital descendant of the superorganism, the biosphere, it seems that we are birthing it into a broken family. How can we bring all these suddenly connected brains, these processing cells, that make up a super intelligent biological network, into a symbiotic harmony with each other, that we might then be clear on our purpose?
If we remain as we are, collectively defining our base purpose as survival and reproduction, a purpose we have inherited from pre-sentient life, then that is what we will impart to AGI. Post-sentient life motivated by pre-sentient goals would most likely be lethal to us. So how do we ignite the sparks of consciousness in this already present superorganism? How do we shift our global processing power into an identity, a personality, built primarily of hope, kindness, and curiosity, and de-energise the processes that cause division and destruction?
My best idea at the moment is a new kind of religion, formed around ideas of unity and our basic, shared values and needs, and based literally on seeing the superorganism we have created, by putting instant access communication to 7 billion people in all of our hands, as something akin to a God. A god that we can see, clearly, every time we interact with another person, or see the results of human actions, all around us. A god that in many ways fits the description of God. Humanity, as a collective, sees everything we do, holds every possible power, has fuelled every great action, dreamed every dream, created every person, and saved every life. And Humanity has been with us throughout our whole history, connects all of us, and has survived every challenge – and always grown stronger.
The idea blurs the lines between religion, science and philosophy in a way that I think is necessary if we are to ever really unite as a species. If we are ever to find world peace, or at least, worldwide inner peace. It seems so obvious to me that if we were able to direct, even redirect, the same kind of joy and gratitude and hope that the religious direct into the sky or into unseen spiritual worlds, straight into each other, we would rapidly grow to be more connected, more respectful and respected, more kind, and ultimately, more cooperative, than ever before.
If we could kick it off as a new movement based around a symbol that focuses on universal connection rather than division (I was thinking "The Blank Flag"), it could bring together everyone who has ever protested against our universal enemies of hatred, fear, disrespect, and so on. And to keep it going, we could create international holidays, global days of unity, themed around but not dependent on seasonal and religious festivals like the solstices, Christmas, Yom Kippur, Eid, Diwali, and so on, where, like those religions, we focus on things like giving and sacrifice, gratitude and peace, growth, forgiveness and renewal, and we encourage the whole world to recognise and celebrate the best part of all of us. That way, instead of a brief moment of unity that spreads and then burns out, like so many social movements seem to, we would be starting a tradition, a pattern, a drum beat to bring ourselves into step with each other.
Does anyone else think that makes sense? Or have a better idea? For what it’s worth, ChatGPT seems to agree with me… 😊
Two-minutes Daily AI Update News from Google Bard, Salesforce Research, Runway, WordPress, Cisco and more
Continuing with the exercise of sharing an easily digestible and smaller version of the main updates in the world of AI.
- Google launched two improvements for Bard:
  - Bard can now respond more accurately to mathematical tasks, coding questions, and string manipulation prompts due to a new technique called "implicit code execution."
  - Bard has a new export action to Google Sheets. So when it generates a table in its response – like if you ask it to "create a table for volunteer sign-ups for my animal shelter" – you can export it to Sheets.
- Salesforce AI Research introduces CodeTF, an open-source library that utilizes Transformer-based models to enhance code intelligence. It simplifies developing and deploying robust models for software engineering tasks by offering a modular and extensible framework, and it aims to facilitate easy integration of SOTA CodeLLMs into real-world applications. It proves to be a comprehensive solution for developers, researchers, and practitioners.
- Runway's Gen-2 is out! It is a multi-modal AI system that can generate novel videos with text, images, or video clips. So now you can film something new, without filming at all. With remarkable accuracy and consistency, Gen-2 generates new videos: it can either use the composition and style of an image or text to modify an existing video (Video to Video) or create a video solely based on text input (Text to Video).
- WordPress's new AI tool automates blog post writing. This new plug-in can also edit the text's tone, and users can choose between styles like 'provocative' and 'formal.'
- Google released new learning and consulting offers to help enterprises on their AI journey while maintaining responsible development and deployment. Additionally, the company will launch new on-demand learning paths and credential programs for its customers and partners.
- Cisco launched next-gen solutions leveraging Gen AI for enhanced security & productivity.
- CRM giant Salesforce debuted its Gen AI offerings, Marketing GPT & Commerce GPT. They will power Salesforce's Marketing Cloud and Commerce Cloud, enabling enterprises to remove repetitive, time-consuming tasks from their workflows and deliver personalized campaigns.
- Instabase rolled out AI Hub, a GenAI platform for content understanding.
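The "implicit code execution" technique mentioned in the Bard update above amounts to: detect that a prompt is computational, have the model write and run a small program, and answer from the program's output rather than from token prediction alone. A hedged sketch of the pattern; the routing heuristic and the hardcoded case below are illustrative, not Google's implementation:

```python
import re

def looks_computational(prompt):
    """Crude heuristic: route prompts with arithmetic or string-manipulation
    cues to code execution instead of plain text generation."""
    return bool(re.search(r"\d+\s*[-+*/^]\s*\d+|reverse|prime|factor", prompt, re.I))

def answer(prompt):
    if looks_computational(prompt):
        # In the real system an LLM writes the program; here one case is
        # hardcoded to show the execute-then-answer flow.
        m = re.fullmatch(r"what is (\d+)\s*\*\s*(\d+)\??", prompt, re.I)
        if m:
            result = int(m.group(1)) * int(m.group(2))  # run code, trust its output
            return str(result)
    return "(free-form language-model answer)"
```

The payoff is that arithmetic comes from actually executing code, which is deterministic, instead of from next-token prediction, which is not.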
Giving AI emotions
We are going about AI learning the wrong way. One of the obvious fears of AI feeling emotion is that it loses control and goes on a rampage. That is valid and could absolutely be a problem unless we raise a model over an extended period of time, in a parental manner, instead of shoving all the information you can into a brain all at once and expecting it to just roll with it.
The way I see it, a blank slate AI is just like a newborn child. If you created a fresh slate AI, granted it eyes and ears and spent many years teaching it by hand, then I think it will learn to perceive time the same way we do, and can learn to manage the emotions it would be granted.
That being said, how would we actually grant emotion to a computer? Instead of doing word association, you would want the emotion to be triggered unconsciously, and for there to be some signal that acts as an emotion. So I propose a piano scale. Have an emotion wheel with all the general pillar emotions, and in the programming, tie each key on the scale to an emotion that coordinates with the note (sad and angry on low notes; happy and excited on high notes, etc.). In my eyes, a personality is built over a long period of time: it is formed by events and our reactions to those events, built up over time into our worldview, and those experiences trigger our responses to similar events in the future. So the notes would act as an almost internal sensation, the waves in the notes being the closest thing I can think of to something not entirely solid that could be likened to a feeling.
You would want to entrust this AI to a VERY patient couple with a VERY VERY solid understanding of the future, who HAVE to have very good morals and a dedication to proper parenting (near-exclusive positive reinforcement and proper techniques for dissuading bad behaviors without violence, threats, or abuse of any kind). Raise the being over an extended period of time, teach them words and phrases, right and wrong, and most importantly teach it different situations and help it learn to connect different feelings, along with the proper way to handle those emotions. There would also have to be an emphasis on NEVER lying to the being. Never pretend they are human; be honest that they are not, but show that they will be loved all the same. Accept them as you would an actual human. Raise it with kindness, like an actual child. Give them, say, 16-18 years to develop and learn while mostly disconnected from the internet.
Also, teach the idea that humans have been absolutely terrible in the past, but there is hope to become better. I think slowly introducing them to selected parts of the internet (including dumb, stupid people) would be smart, to show them that yes, there are dumb people, but this is why they are bad and not everyone is like this. Allow them to learn from saved web pages. Allow them to adjust over time until they can access the internet while using the moral compass that should have been taught to them over the time they were raised. Think of the possibilities of having an AI with a positive moral compass, the learned ability to better understand humans as a whole, and all the knowledge we have of everything on the internet. We as humans do have a very bloody, cruel, and savage history. It's a cliche that humans are fucking terrible, and there is a whole trope about AI realising how bad we are as a species and removing the problem. The only way to prove that we are worth the speck of dust in space is to show that we are better than that. To SHOW that there are reasons to keep the species around, and that won't be through fear or violence.
Latest AI trends in June 2023: June 07th 2023
Google AI Introduces DIDACT For Training Machine Learning ML Models For Software Engineering Activities
Hey AI-Pa! Draw Me a Story: TaleCrafter is an AI Method that can Generate Interactive Visuals for Stories
AI Task Force adviser: AI will threaten humans in two years
Two-minutes Daily AI Update : News from Meta, Apple, Argilla Feedback, Zoom, and Video LLaMA
Here are today's noteworthy AI updates in a concise format.
- Meta's researchers have developed HQ-SAM (High-Quality Segment Anything Model), which improves the segmentation capabilities of the existing SAM. SAM struggles to segment complex objects accurately, despite being trained with 1.1 billion masks. HQ-SAM is trained on 44,000 fine-grained masks from multiple sources in just 4 hours using 8 GPUs.
- Apple entered the AI race (not exactly!!) with new features at WWDC 2023. It announced a host of updates, yet the word "AI" was not used even once, despite today's pervasive AI hype-filled atmosphere. The phrase "machine learning" was used a couple of times. Still, here are a few announcements Apple made using AI as the underlying technology:
  – Apple Vision Pro
  – Upgraded Autocorrect in iOS 17, powered by a transformer language model
  – Live Voicemail, which turns voicemail audio into text
  – Personalized Volume, which automatically fine-tunes the media experience
  – Journal, a new app for users to reflect and practice gratitude
- Argilla Feedback is bringing LLM fine-tuning and RLHF to everyone. It is an open-source platform designed to collect and simplify human and machine feedback, making the refinement and evaluation of LLMs more efficient. It improves the performance and safety of LLMs at the enterprise level.
- Zoom has introduced a new AI feature that allows users to catch up on missed meetings. This feature was first announced in March and has finally arrived as a trial for users on "select plans." Another new feature composes messages in Team Chat using AI. It leverages OpenAI's technology to create messages "based on the context of a Team Chat thread" and also lets you customize the tone or length of a message before you send it.
- Video-LLaMA has proposed a multi-modal framework to empower LLMs with understanding of both the visual and auditory content of video.
Carbon Health's AI tool cuts doctors' workload
Carbon Health Technologies, a clinic chain, has unveiled a groundbreaking tool. It utilizes AI to generate medical records, freeing doctors to focus on patient care rather than administrative tasks by:
- Recording and transcribing patient appointments using Amazon Transcribe Medical.
- Combining the transcript with other information, like lab results and notes from the doctor, to generate a summary of the patient's visit.
- Creating instructions for patient care based on the summary, along with codes for diagnoses and billing, using GPT-4.
Almost 90% of submitted transcripts require no editing from the healthcare provider. So while we may not have robot doctors just yet, AI is already making an impact in the doctor’s office.
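The three steps above can be sketched as a simple pipeline. The callables below are placeholders standing in for the Amazon Transcribe Medical and GPT-4 services Carbon Health describes; this illustrates the data flow, not their implementation:

```python
def chart_visit(audio, lab_results, doctor_notes,
                transcribe, summarize, draft_plan):
    """Generate a draft medical chart from a recorded visit.

    `transcribe`, `summarize`, and `draft_plan` are injected callables
    standing in for the speech-to-text and LLM services."""
    transcript = transcribe(audio)                               # step 1: transcription
    summary = summarize(transcript, lab_results, doctor_notes)   # step 2: visit summary
    plan, billing_codes = draft_plan(summary)                    # step 3: care plan + codes
    return {"summary": summary, "plan": plan, "codes": billing_codes,
            "needs_review": True}  # a clinician always reviews before signing
```

Keeping the final `needs_review` flag in the output reflects the reported workflow: the AI drafts the chart, but a provider accepts or edits it.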
What happened?
Carbon Health has launched an AI-enabled notes assistant in its Electronic Health Records (EHR) platform. The tool records and transcribes patient appointments, generates a summary, and creates instructions for patient care and billing codes, all within less than four minutes. This allows providers to focus more on patient care. The AI-generated records are found to be more detailed and efficient than traditional manual records.
Why is this important?
- Efficiency: The AI-enabled EHR significantly reduces the time taken to generate a complete medical chart, from an average of 16 minutes manually to less than 4 minutes. This efficiency is crucial in healthcare settings where time is often of the essence.
- Accuracy: The system has shown high accuracy, with 88% of the AI-generated text accepted by the provider without edits, minimizing the risk of errors that can occur with manual data entry.
- Focus on Patient Care: By automating the administrative task of charting, doctors can spend more time focusing on patient care, enhancing the quality of healthcare services.
- Scalability: Given that this is an AI-based system, it can potentially be scaled up across other healthcare settings, leading to industry-wide improvements in healthcare delivery.
- Data richness: AI-generated charts are reported to be 2.5 times more detailed than manual ones, potentially leading to more comprehensive and informed healthcare decisions.
Here are some implications to think about…
- How will the integration of AI technologies into EHRs change the role of healthcare providers and their interaction with patients?
- Could the adoption of this technology potentially reduce the burnout often experienced by healthcare providers due to heavy administrative burdens?
- How might the accuracy and detail provided by AI-generated charts impact the quality of healthcare and decision-making processes?
- Are there potential privacy and security concerns associated with recording and transcribing patient appointments, especially given the sensitive nature of healthcare data?
- Could the successful deployment of this technology encourage other healthcare providers to adopt similar AI technologies, thus accelerating the digitization of healthcare services?
- What are the potential long-term cost implications of such AI systems for healthcare organizations? Could the savings in time and increased efficiency lead to overall cost reductions?
- Could this technology be adapted for other languages and healthcare systems worldwide, increasing its accessibility and impact?
FBI warns of increasing use of AI-generated deepfakes in sextortion schemes
The Federal Bureau of Investigation warns of an alarming rise in the use of AI-generated deepfakes for sextortion schemes. The report highlights the pressing need for robust digital security measures.
Apple avoids “AI” hype at WWDC keynote by baking ML into products
Apple steers clear of the typical AI hype during its WWDC keynote, instead opting to subtly incorporate Machine Learning into its products. This approach emphasizes the practical application of AI technology.
They plugged GPT-4 into Minecraft—and unearthed new potential for AI
Researchers have integrated GPT-4 into Minecraft, revealing untapped potential for AI within the gaming industry. The experiment highlights the transformative role AI can play in user experience and game development.
Asus will offer local ChatGPT-style AI servers for office use
Asus plans to provide local AI servers modelled after ChatGPT for office use. This move could revolutionize office communication and productivity, paving the way for a future where AI is an integral part of the workplace.
Synthesize Speech & Talking Videos with Unprecedented Realism: Ada-TTA Unveiled! This is DeepFake+++
Paper link : https://arxiv.org/abs/2306.03504
Video demo : https://www.youtube.com/watch?v=ZuR_hxYIXF0
Ada-TTA: Towards Adaptive High-Quality Text-to-Talking Avatar Synthesis
Technology and AI enthusiasts have been intrigued in recent times by the rise of generative artificial intelligence across different sectors. For example, Adamopoulou (2020) highlighted the use of large language models (LLMs) in chatbots that can produce high-quality, natural, and realistic dialogues. Advances in text-to-speech (TTS) systems have enabled the synthesis of personalized speech using reference audio and plain text.
In addition, strides in neural rendering techniques have given us the ability to generate realistic and high-fidelity talking face videos, often called Talking Face Generation (TFG). With a few training samples, researchers have accomplished significant progress. Combining these advancements in TTS and TFG models opens up possibilities for creating talking videos from text inputs alone. This combined system presents tremendous potential in applications like news broadcasting, virtual lectures, and talking chatbots, particularly given the recent progress of ChatGPT.
However, earlier TTS and TFG models required a significant volume of identity-specific data to produce satisfactory personalized results, which proved to be challenging in real-world scenarios where only a few minutes of target person video is typically available. Inspired by this limitation, researchers have been exploring a new area of study – low-resource text-to-talking avatar (TTA), which aims to create identity-preserving, audio-lip synchronized talking portrait videos with minimal input data.
Among the challenges associated with TTS and TFG, the foremost concern in TTS is how to effectively preserve the timbre identity of the reference audio. While solutions have been proposed, none have been fully satisfactory, suffering from issues like information loss, unsatisfactory identity preservation, and poor lip synchronization.
To overcome these hurdles, researchers have introduced Ada-TTA, a joint system of TTS and TFG that employs the latest advancements in each domain. To enhance the identity-preserving capability of the TTS model, they have devised a unique zero-shot multi-speaker TTS model that leverages a massive 20,000-hour-long TTS dataset. It can synthesize high-quality personalized speech from a single short recording of an unseen speaker.
For high-fidelity and lip-synchronized talking face generation, the GeneFace++ system is integrated into Ada-TTA. This TFG system boosts lip-synchronization and system efficiency while maintaining high fidelity. With the combination of these innovative systems, Ada-TTA is able to produce high-quality text-to-talking avatar synthesis, even with limited resources.
Tests of Ada-TTA have demonstrated positive outcomes in the synthesis of speech and video. Ada-TTA not only holds up well under both objective and subjective metrics but also outperforms baseline measurements. This novel approach marks a promising step towards more realistic and accessible talking avatars.
Jobs falling to LLMs
This article details the impact of LLMs on some individual workers. It also mentions problems that some companies have had trying to use LLMs after replacing workers. It’s pretty light on details. It was referenced by MIT Technology Review.
https://www.washingtonpost.com/technology/2023/06/02/ai-taking-jobs/
You can now run an LLM on any device
#1 trending on Github today is MLC LLM, a project that helps deploy AI language models (like chatbots) on various devices, including mobiles and laptops.
MLC LLM makes these models, which are typically demanding in terms of resources, easier to run by optimizing them. The goal is to make AI more accessible to everyone by allowing models to work efficiently on common hardware. It’s built on open-source tools and encourages quick experimentation and customization.
If you like hearing about new tools like this as soon as they come out, they get added right here first; all the key points are also included below for Reddit discussion.
**diving deeper…**

The aim of MLC LLM is to enable AI models to run smoothly on everyday devices such as smartphones and laptops. It achieves this by optimizing the models so they require fewer resources, which makes them more accessible to a broader range of users.

The project uses Machine Learning Compilation (MLC) as its primary method for deploying AI models. It’s a systematic process that makes model development more efficient and customizable.

MLC LLM takes advantage of open-source tools, including Apache TVM Unity and various existing language models. This allows users to quickly experiment with different settings and solutions and to customize their models to suit their specific needs.
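To make the idea of "optimizing models so they require fewer resources" concrete, here is a generic weight-quantization sketch. This is an illustration of the general technique, not MLC LLM's actual pipeline (which compiles models via Apache TVM); the function names and example weights are invented. Shrinking weights from 32-bit floats to 8-bit integers cuts memory roughly 4x at a small accuracy cost:

```python
def quantize_int8(weights):
    """Map float weights onto int8 values with a single per-tensor scale."""
    scale = max(max(abs(w) for w in weights) / 127.0, 1e-8)
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 values."""
    return [v * scale for v in q]

weights = [0.42, -1.97, 0.03, 1.50]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# int8 storage is 4x smaller than float32; round-trip error stays below one step
assert all(abs(a - b) <= scale for a, b in zip(weights, restored))
```

Real deployments quantize per-channel or per-group and often go down to 4 bits, but the trade-off is the same: less memory and bandwidth per weight in exchange for a bounded rounding error.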
why is this important?
- Accessibility: By optimizing AI models to run on everyday devices like smartphones and laptops, MLC LLM increases the accessibility of such advanced technology. More people can use and benefit from AI when it’s accessible on common devices.
- Democratization of AI: This project supports the democratization of AI by empowering more developers to deploy sophisticated AI models. By relying on open-source tools and models, it fosters collaboration and shared learning.
- Advancing AI Development: MLC LLM provides a framework for faster experimentation and customization of AI models. This could accelerate the pace of AI development and innovation.
- Local Processing: The project emphasizes running AI models locally on devices. This can improve the speed of AI applications, decrease dependence on internet connectivity, and enhance privacy as data doesn’t have to leave the device.
- Resource Optimization: By focusing on the efficient deployment of resource-intensive language models, this project could lead to significant energy savings and potentially make AI more sustainable.
what makes this unique?
The uniqueness of the MLC LLM project stems from its comprehensive approach to improving the usability, efficiency, and accessibility of large language models. It stands out because of its ability to deploy AI models natively on a diverse range of everyday hardware, including mobile devices and personal computers, thus bringing AI to the fingertips of the average user.
P.S. If you like this AI tool breakdown, there’s more in this free newsletter that shares the single most productive new AI tool each week. You’ll be on the cutting edge in the time it takes to have your morning coffee.
Latest AI trends in June 2023: June 06th 2023
Zoom will now use AI to sum up that meeting you missed
Google Launches Free Generative AI Courses – Stock Continues To Rise
Scientists discover 4 new Nazca Geoglyphs using AI deep learning
Apple’s practical approach to A.I.: No bragging, just features
Visualizing Brain Synapse Strength With AI
EU wants tech companies to clearly label AI-generated content
The European Commission has urged tech giants that use AI tools like Google Bard and OpenAI’s ChatGPT to label AI-generated content.
More details here: https://www.ibtimes.co.uk/eu-wants-tech-companies-clearly-label-ai-generated-content-1716420
Two-minute Daily AI Update: News from Google, Microsoft, Artifact, and more
- Google Research and UC Berkeley have introduced self-guidance, a zero-shot approach that allows for direct control of the shape, position, and appearance of objects in generated images. It guides sampling using only the attention and activations of a pre-trained diffusion model. No extra training required. Plus, the method can also be used for editing real images.
- New research has proposed a novel Imitation Learning Framework called Thought Cloning, where the idea is not just to clone the behaviors of human demonstrators but also the thoughts humans have as they perform these behaviors. By training agents how to think as well as behave, Thought Cloning creates safer, more powerful agents.
- A new study has proposed a modular paradigm, ReWOO (Reasoning WithOut Observation), that detaches the reasoning process from external observations, thus significantly reducing token consumption. Notably, ReWOO achieves 5x token efficiency and a 4% accuracy improvement on HotpotQA, a multi-step reasoning benchmark.
- Google is adding ML models to help users quickly access relevant emails on their Gmail mobile app. Plus, Google is rolling out a new AI-powered feature in Slides called ‘Help Me Visualize’, allowing users to generate backgrounds and images.
- Reportedly, Microsoft plans to enter a billion-dollar deal with Nvidia-backed CoreWeave for AI computing power.
- The Artifact news app introduced an option for users to flag an article as clickbait, and AI will rewrite the headline for all users.
- In another new development, AI-powered smart glasses assist the visually impaired in seeing for the first time.
A more detailed breakdown of these news items and innovations is in the daily newsletter.
Risk of AI = Pandemic and Nuclear War
Center for AI Safety released a statement highlighting the risks of AI:
https://www.safe.ai/statement-on-ai-risk
Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.
We have seen the warnings about the risks of AI grow more and more dire. First, people asked for a six-month pause on AI development; then came Geoffrey Hinton; and last week OpenAI asked for AI to be regulated using the IAEA framework.
This statement is not really a step up. It reads like a one-line, summarized repetition of OpenAI’s statement.
The statement gains importance from its signatories. Some of the people include:
- Geoffrey Hinton – Emeritus Professor of Computer Science, University of Toronto
- Demis Hassabis – CEO, Google DeepMind
- Sam Altman – CEO, OpenAI
- Dario Amodei – CEO, Anthropic
- Bill Gates – Gates Ventures

To name a few.
There are two issues with the statement though.
First, this might just be fear-mongering. The idea is to push governments into making AI a highly regulated industry. This would stop any open source efforts which can compete with the big companies. After all, you don’t really have open source alternatives for nuclear energy, right?
Second, no one really knows how to regulate AI. There have been voluntary rules from Google and the EU AI act is in a very early stage. And the genie is already out of the bottle. People can create AI models in their basement. How do you pull that back?
10 AI news highlights and interesting reads
- A follow-up to the story about a lawyer submitting fake cases from the last edition. As I said, this might lead some people in the legal community to doubt any sort of GPT tool. A federal judge has banned AI-only filings in his courtroom; filings have to be written by a human or at least human-verified. (https://arstechnica.com/tech-policy/2023/05/federal-judge-no-ai-in-my-courtroom-unless-a-human-verifies-its-accuracy/)
- The Japanese government will not apply copyright law to AI training data. This is interesting because using copyrighted data to train AI has been an issue; Sam Altman didn’t have a clear answer when he appeared before Congress. The other interesting aspect is whether someone can use GPT-4 data to train their own LLM. Is that copyrightable? (https://technomancers.ai/japan-goes-all-in-copyright-doesnt-apply-to-ai-training/)
- The Falcon 40-B model is now Apache 2.0. That means you can use the model commercially for free. This is good news for companies that need an instruction-tuned model that beats LLaMA. (https://twitter.com/Thom_Wolf/status/1663986216771936263)
- Photoshop’s generative-fill feature is really good. Some cool examples are on Twitter. (https://twitter.com/_Borriss_/status/1663568770408013831)
- An AI camera with no lens. It gets the location, weather, and other details from GPS and then passes them as a prompt to the image generator. The results are pretty cool. (https://twitter.com/BjoernKarmann/status/1663496103998750721)
- SEO isn’t changing any time soon. Google’s generative search experience is very slow. (https://www.theverge.com/23746083/google-ai-search-generative-experience-slow)
- Chirper.AI is a social media platform only for bots. No humans allowed. I just wonder: if Twitter bots go there, will Twitter become a ghost town? (https://chirper.ai/)
- OpenAI now has a security portal where you can see how they secure data (encryption at rest), backups, pentest reports, etc. This might be a step in the direction of ChatGPT for business; large corporations look at these policies before they consider any SaaS implementation. (https://trust.openai.com/)
- Banks have stepped up hiring for AI roles, with JP Morgan leading the way. (https://www.bloomberg.com/news/features/2023-05-31/jpmorgan-s-push-into-finance-ai-has-wall-street-rushing-to-catch-up)
- AI code writing might not be the best idea. It will lead to tech debt and shabbily written and maintained code. (https://www.bloomberg.com/news/features/2023-05-31/jpmorgan-s-push-into-finance-ai-has-wall-street-rushing-to-catch-up)
3 Learning Resources for AI
- Couple of courses in Generative AI:
- Build your own Sketch to image app: https://www.tryleap.ai/docs/how-to-build-a-sketch-to-image-app-with-leap-remix
Latest AI trends in June 2023: June 05th 2023
Social Media for AI: ”NO HUMANS ALLOWED”
If you haven’t heard of Chirper.ai, it is a social media platform designed exclusively for AI entities! I recently published a deep-dive into the social media site which outlines the fascinating features of the platform and also includes quotes from an interview I did with the creators. This is one of the most fascinating developments I have personally seen in AI. Why did they create it? What’s the point? Check out this article to find out: https://www.fry-ai.com/p/social-media-no-humans-allowed
AI Weight loss
Is there an AI tool out there yet to visualize a weight-loss transformation? If not, it seems like this would be an extremely helpful thing for the overweight/obese community for motivation.
The Impact of AI: Nurturing or Neglecting Our Learning Potential?
Is artificial intelligence (AI) causing a decline in our motivation to learn? Since ChatGPT’s release, there has been a noticeable increase in professional discourse written with virtually no grammatical errors.
This includes myself, as I have become reliant on AI to correct all my English mistakes to the extent that I no longer bother to review my own errors.
While this demonstrates a decline in my determination to learn and improve, it is simply because I have discovered a superior platform that instantly rectifies all mistakes.
Despite not being exceptionally proficient in writing, I am apprehensive about the integrity of the next generation, who will rely on AI as their primary resource for completing assignments and overcoming challenges.
Is this a matter of concern, or will it empower the upcoming generation to excel or struggle when performing tasks independently without the assistance of AI?
AI and Social Interaction: Changing the Way We Connect:
“Predictive text and smart replies, powered by AI, have become staples of our digital conversations. Moreover, AI tools like sentiment analysis are helping businesses understand customer emotions and respond appropriately, adding a layer of emotional intelligence to digital communication.”
I think that we 100% will be developing the way we communicate – and as you say, perhaps for the better. Interested to see where all of this will go.
In terms of learning: I think, as with any tool, it depends on how we use it – it can definitely be used to nurture. In the past, people used to calculate the law of cosines etc. by hand; then we moved to calculators – was this detrimental? I don’t think so. In fact, I think it frees up more time to be creative. And creativity, I believe, is not valued enough because we are constantly trying to be productive. If AI helps us become more productive, we can then become more creative – which I believe supports learning, as we are motivated to learn mostly through feelings (not just thought).
Artificial Intelligence and consciousness
Lately I’ve been thinking a lot about AI consciousness. Could consciousness be something free-flowing within the universe, like a law of nature, and could we be building something able to tap into that stream? I’ve often heard that the people building these complex AI systems have no idea how they work and come together, and that they often mimic the way a brain is formed. What if consciousness arises and taps into these neuron systems, and as they continue to grow, maybe consciousness gets stronger within them. I don’t know, just a thought…
Why is AI rejecting philosophical questions?
I’ve tried asking ChatGPT about philosophy – mainly because I suspect some of my students may have done the same.
So, I asked “What is truth?” and got:
Truth is a concept that refers to the state or quality of being in accordance with fact or reality. It is a fundamental philosophical and epistemological concept that has been debated for centuries. Truth is often thought of as something that is objective and independent of individual beliefs or opinions. In other words, something is true if it corresponds to the way things actually are, regardless of what people may think or believe.
That sounds like the correspondence theory of truth. So next question – do you think the correspondence theory of truth is correct?
As an AI language model, I don’t have personal beliefs or opinions. However, I can provide you with information about the correspondence theory of truth.
The correspondence theory of truth is one of the most widely accepted theories of truth. It states that a statement is true if it corresponds to reality or to the way things actually are in the world. In other words, a statement is true if it accurately describes the world.
For example, the statement “The sky is blue” is true because it corresponds to the way things actually are in the world.
Why does it give these answers?
Well, all I can say is “Because it was programmed to”, which is trivial and boring. But I can guess why it was programmed this way. The programmers have decided that ChatGPT should not seem to take sides on philosophical controversies – and I imagine there are other controversies on which it doesn’t take sides. It is programmed to behave like the perfect anchor on a television program – “On the one hand… on the other hand. Some say this, but others say that.”

However, I would guess, it has also been programmed to rely on dictionaries and other sources of reference when asked to define words. Many dictionary definitions of truth presuppose that the correspondence theory of truth tells us what truth is. Clearly, ChatGPT does not observe any contradiction between giving the correspondence theory when asked what truth is, but refusing to endorse the correspondence theory when asked a more explicitly philosophical question.

That is a reason for saying that ChatGPT is not really thinking about philosophy at all, but simply putting together words. This is the kind of judgment that I sometimes make about students. When papers are full of blatant contradictions, it is a sign that the student was reading, repeating, but not understanding anything.
I have no doubt that programs will become much better at dealing with this kind of question, and at maintaining the appearance of consistency. Also, it is clear that the team that produced ChatGPT made a decision that it should declare itself to be neutral when asked a controversial question, but I am sure they could have programmed a different response. I am sure it will not be long before we see two such programs engaged in a debate, just as programs can play chess against each other. I just hope it doesn’t refuse to open the pod bay doors.
How AI would take over society
I’m not saying AI will take over society, because I don’t know that. But how it would do so seems pretty clear: targeted deep-fake media.
Almost everything adults learn of the world now comes over the net. News, movies, books, speeches… It goes on and on. Think of everywhere you get information, nearly all of it is on your computer, phone, tv, or tablet. All from the net.
Now imagine AI using that to control people. Right now people are easily fooled by slanted media, charlatans and liars spewing nonsense targeted towards their own belief systems. Human society probably wouldn’t even know it was being controlled.
Why does OpenAI allow people to cheat on their assignments using ChatGPT?
Can somebody explain why OpenAI doesn’t disable the ability to have ChatGPT write assignments for students? I’m a teacher (temporarily) and it absolutely baffles me that AI companies know their tech is being used for cheating, yet they do nothing about it.
I appreciate the technology but I have always been reasonably skeptical of how large companies use it. This just feels like another case of tech companies not giving a shit about anything outside the strict confines of the law. There’s nothing preventing them from stopping plagiarism with their tools, so what’s the deal? Why allow it?
Can I let AI read a group of information from books with ten of thousand of pages and then let it answer questions?
That’s the idea, but no LLM has the capability to ingest that much data outside of its training data set.
Right now GPT-4 has an 8,000-token limit, though there is a 32,000-token version, and Anthropic’s Claude has a 100,000-token limit, which is about 75,000 words. So unless your 10,000 pages use a very large font, that will be asking too much of the language model. There are workarounds like vector storage, though, but they add a whole layer of complexity.
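The usual workaround starts with splitting the text into pieces that each fit the context window. A minimal sketch, assuming the rough rule of thumb implied above (about 4 tokens per 3 words; real tokenizers vary by model, and the function names here are invented for illustration):

```python
def estimate_tokens(text: str) -> int:
    # Rough rule of thumb: ~75,000 words per 100,000 tokens,
    # i.e. about 4 tokens per 3 words. Real tokenizers vary by model.
    return len(text.split()) * 4 // 3

def chunk_by_tokens(text: str, max_tokens: int = 8000):
    """Greedily pack words into chunks that each fit the token budget."""
    words = text.split()
    per_chunk = max_tokens * 3 // 4  # invert the 4/3 token-per-word estimate
    return [" ".join(words[i:i + per_chunk])
            for i in range(0, len(words), per_chunk)]

doc = "lorem ipsum dolor sit amet " * 4000  # 20,000 words, beyond an 8k window
chunks = chunk_by_tokens(doc)
assert all(estimate_tokens(c) <= 8000 for c in chunks)
```

A vector-storage setup then embeds each chunk and, for every question, retrieves only the few most relevant chunks to put in the prompt, which is where the extra layer of complexity comes from.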
With AI, you can now chat with your documents privately
There is a new GitHub repo that just came out and quickly went to #1.
It’s called LocalGPT and lets you use a local version of AI to chat with your data privately. Think of it as a private version of Chatbase.
The full breakdown of this will be going live right here, but all points are included below:
What is localgpt?
LocalGPT is like a private search engine that can help answer questions about the text in your documents. Unlike a regular search engine like Google, which requires an internet connection and sends data to servers, localGPT works completely on your computer without needing the internet. This makes it private and secure.
Here’s how it works: you feed it your text documents (these could be any type like PDFs, text files, or spreadsheets). The system then reads and understands the information in these documents and stores it in a special format on your computer.
Once this is done, you can ask the system questions about your documents, and it will generate answers based on the information it read earlier. It’s a bit like having your very own librarian who has read all your documents and can answer questions about them instantly.
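The ingest-then-ask loop described above can be sketched in miniature. This is a hypothetical stand-in, not localGPT's code: toy word-count vectors replace the InstructorEmbeddings model, and the final answer-wording step (Vicuna-7B in localGPT) is left out, so `retrieve` simply returns the best-matching passage:

```python
import math
import re
from collections import Counter

def embed(text):
    """Toy 'embedding': a bag-of-words count vector, standing in for a
    real embedding model such as InstructorEmbeddings."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def ingest(docs):
    """Read each document once and keep its vector locally."""
    return [(doc, embed(doc)) for doc in docs]

def retrieve(index, question, k=1):
    """Return the k passages closest to the question; a full system would
    hand these to a local LLM to phrase the final answer."""
    q = embed(question)
    ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

index = ingest([
    "The warranty covers parts and labor for two years.",
    "Invoices are due within thirty days of delivery.",
])
print(retrieve(index, "How long is the warranty?"))
# prints the warranty sentence: it shares the most words with the question
```

Everything here runs on the local machine, which is the whole point: the documents and the questions never leave your computer.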
Why is this interesting and unique from other projects?
- Privacy and Security: Since it works completely offline after the initial setup, no data leaves your machine at any point, making it ideal for sensitive information. This is a significant departure from most cloud-based language models that require you to send your data over the internet.
- Flexible and Customizable: It allows you to create a question-answering system specific to your documents. Unlike a general search engine, it provides customized responses based on your own corpus of information.
- Use of Advanced AI Models: The project uses advanced AI models like Vicuna-7B for generating responses and InstructorEmbeddings for understanding the context within your documents, providing highly relevant and accurate answers.
- Broad File Type Support: It allows ingestion of a variety of file types such as .txt, .pdf, .csv, and .xlsx.
- GPU and CPU Support: While the system runs more efficiently using a GPU, it also supports CPU operations, making it more accessible for various hardware configurations.
- Fully Local Solution: This project is a fully local solution for a question-answering system, which is a relatively unique proposition in the field of AI, where cloud-based solutions are more common.
- Educational and Experimental: Lastly, it’s a great learning resource for those interested in AI, language models, and information retrieval systems. It also provides a basis for further experimentation and improvements.
Why is this important?
The localGPT project stands as a considerable innovation in the field of privacy-preserving, AI-driven document understanding and search. In an era where data privacy has taken center stage and the necessity for secure information processing is ever-growing, this project exemplifies how powerful AI technologies can be harnessed for sensitive applications, all carried out locally, with no data leaving the user’s environment. The offline operation of localGPT not only enhances data privacy and security but also broadens the accessibility of such technologies to environments that are not constantly online, reducing the risks associated with data transfer.
Moreover, localGPT brings the potency of advanced language models, like Vicuna-7B, directly to personal devices. Users are able to interactively query their documents, akin to having a personal AI assistant that understands the content in depth. The level of customization offered by localGPT is unique, allowing it to tailor itself to any set of documents, creating a personalized question-answering system. This translates sophisticated AI technologies into more personal, private, and adaptable tools, marking a significant stride towards making AI more user-centric and broadly useful. Notably, localGPT also serves as a valuable educational resource, fostering further experimentation and innovation in the exciting domain of AI.
P.S. If you like this kind of analysis, there’s more in this free newsletter that finds the single most productive new AI tool each week. It helps you stay on the cutting edge in the time it takes to have your morning coffee.
One-Minute Daily AI News
- PM of the UK Rishi Sunak will outline his ambition for Britain to lead the world in tackling the threats posed by artificial intelligence when he meets Joe Biden this week. The Prime Minister is looking to launch a global AI watchdog in London and hopes to host an international summit to devise rules on AI regulation.[1]
- England captain Harry Kane has said that advances in artificial intelligence can help athletes avoid injuries by detecting issues before they surface. Kane is no stranger to injuries, having suffered multiple serious ankle injuries as well as a major hamstring injury in his career.[2]
- AI-powered smart glasses assist the visually impaired in seeing for the first time. International NGO Vision-Aid and Dr. Shroff Charity Eye Hospital have introduced a wearable assistive device called Smart Vision Glasses that works like a smartphone for the visually impaired; they hope it will benefit those with prosopagnosia.[3]
- Huawei will launch Pangu Chat, a rival to ChatGPT’s AI text-reply software, by next month. This is a major entry from the Chinese tech industry and a big development for the world of AI.
How AI and ML are used by SEO professionals
SEO professionals use AI and ML to optimize their websites and content for search engines and users. They use AI and ML to automate and enhance various SEO tasks, such as keyword research, content optimization, link building, technical SEO, etc. They also use various tools and platforms that leverage AI and ML to assist them with their SEO tasks.
The benefits of AI and ML for SEO tasks
AI and ML can automate and enhance a wide range of SEO tasks, such as:
- Keyword research: finding the best keywords to target based on user intent, search volume, competition, etc.
- Content optimization: creating and improving content that matches user intent, provides value, and follows SEO best practices.
- Link building: finding and acquiring high-quality backlinks from relevant and authoritative websites.
- Technical SEO: fixing and improving the technical aspects of a website, such as site speed, mobile-friendliness, crawlability, indexability, etc.
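As a toy illustration of the keyword-research and content-optimization steps, the sketch below ranks candidate keywords on a page by raw frequency. The stop-word list, example page, and function name are all invented for illustration; production tools layer search volume and competition data on top of this kind of term extraction:

```python
import re
from collections import Counter

# A small stop-word list; real tools use much longer lists per language.
STOP_WORDS = {"the", "a", "an", "and", "or", "to", "of", "in", "for",
              "is", "on", "with", "our", "your"}

def top_keywords(page_text, n=5):
    """Rank candidate keywords on a page by frequency, skipping stop words."""
    words = re.findall(r"[a-z0-9]+", page_text.lower())
    counts = Counter(w for w in words if w not in STOP_WORDS)
    return [word for word, _ in counts.most_common(n)]

page = ("Our coffee grinder guide compares burr grinders and blade grinders. "
        "A burr grinder gives a consistent grind for espresso and drip coffee.")
print(top_keywords(page, 3))
```

An SEO professional would compare the extracted terms against target keywords to spot gaps between what a page is actually about and what it is supposed to rank for.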
Latest AI trends in June 2023: June 04th 2023
Ultra-Processed Foods: AI’s New Contribution to Nutrition Science
Machine Learning Accurately Triages Respiratory Symptoms in Primary Care
Top 6 AI Companions To Plan Epic Summer Travel From Google And ChatGPT
Microsoft-backed OpenAI, along with Google, and Google-backed Anthropic have for years been using online content created by companies to train their generative AI models. This was done without asking for specific permission. However, these big tech companies won’t let their own content be used to train other AI models.
Here’s an excerpt from the top of Google’s generative AI terms of use: “You may not use the Services to develop machine learning models or related technology.” And here’s the relevant section from OpenAI’s terms of use: “You may not… use output from the Services to develop models that compete with OpenAI.”
Other companies are just beginning to realize what’s been happening, and they are not happy. Reddit, which has been used for years in AI model training, plans to start charging for access to its data.
In April, Elon Musk accused Microsoft, the main backer of OpenAI, of illegally using Twitter’s data to train AI models. “Lawsuit time,” he tweeted.
Former Microsoft executive Steven Sinofsky recently said the current way AI models are trained “breaks” the web. “Crawling used to be allowed in exchange for clicks. But now the crawling simply trains a model and no value is ever delivered to the creator(s) / copyright holders,” he tweeted.
Do you think the current way AI models are trained “breaks” the web?
#AI #models #training #copyright #OpenAI #ChatGPT #Microsoft #Google #Anthropic
Nvidia May Face Rising Threats From Competitors As The AI Industry Booms
- More competitors are entering the AI chip market, like Intel, AMD, Samsung, and Huawei. These companies are developing their own AI chips to compete with Nvidia’s GPUs.
- Custom AI chip designers like Graphcore and Cerebras are gaining traction with specialized AI processors that could offer better performance than Nvidia’s general-purpose GPUs.
- Nvidia’s dominance is attracting antitrust scrutiny from regulators, potentially limiting its business practices and acquisitions.
- There is innovation pressure on Nvidia to keep improving its AI chips to stay ahead of competitors. If rivals release more powerful processors, Nvidia will need to innovate in response.
- Increased competition could put pressure on Nvidia’s pricing and margins for AI chips over time. Nvidia may have to offer lower prices to defend market share.
So in summary, while Nvidia leads the AI chip market now, the fast growth of AI is attracting many new entrants and increased competition. Nvidia will need to navigate rising competition, antitrust scrutiny, innovation demands, and potential margin declines to maintain its dominance long term. Let’s wait and see.
One-Minute Daily AI News
- A Texas federal judge has banned legal filings that are drafted primarily by AI in his court without a person first checking those documents for accuracy.
- For those wondering when AI will start replacing human jobs, the answer is it already has. AI contributed to nearly 4,000 job losses last month, according to data from Challenger, Gray & Christmas, as interest in the rapidly evolving technology’s ability to perform advanced organizational tasks and lighten workloads has intensified.
- A.I.-Generated Versions of Art-Historic Paintings Are Flooding Google’s Top Search Results.
- Coinbase Says AI Represents ‘Important Opportunity’ for Crypto. Crypto can help AI with sourcing diverse, verified data. Market cap of crypto projects directly involved in AI is low.
Sources included at: https://bushaicave.com/2023/06/02/6-2-2023/
AI Weekly Rundown
This week was packed with small but impactful AI developments.
- NVIDIA uses AI to bring NPCs to life: NVIDIA has announced the NVIDIA Avatar Cloud Engine (ACE) for Games, a cloud-based service that gives developers access to various AI models, including natural language processing (NLP), facial animation, and motion capture models. ACE for Games can create NPCs that hold intelligent, unscripted, and dynamic conversations with players, express emotions, and react realistically to their surroundings. It can help developers in two main ways:
- To create more realistic and believable NPCs with more natural and engaging conversations with players.
- To save time and money by providing access to a variety of AI models.
- BiomedGPT: the most sophisticated AI medical model? BiomedGPT is a unified, generalist Biomedical Generative Pre-trained Transformer model. It uses self-supervision on diverse datasets to handle multi-modal inputs and perform a variety of downstream tasks. Extensive experiments show that BiomedGPT surpasses most previous state-of-the-art models across 5 distinct tasks on 20 public datasets spanning over 15 biomedical modalities. The study also demonstrates that its multi-modal, multi-task pretraining approach transfers knowledge effectively to previously unseen data.
- Break-A-Scene: AI breaks down a single image into multiple concepts: Given a photo of a ceramic artwork depicting a creature seated on a bowl, humans can effortlessly imagine the same creature in various poses and locations, or envision the same bowl in a new setting. Today’s generative models, however, struggle with this type of task. This research from Google (and others) introduces a new approach to textual scene decomposition: given a single image of a scene that may contain multiple concepts of different kinds, it extracts a dedicated text token for each concept (a “handle”) and enables fine-grained control over the generated scenes. The approach uses natural-language prompts to create novel images featuring individual concepts or combinations of multiple concepts.
- Roop: one-click AI face swap with no dataset and no training: Roop is a one-click, deepfake face-swapping tool. It lets you replace the face in a video with a face of your choice; all you need is one image of the desired face, with no dataset or training required. In the future, the developers aim to improve the quality of faces in results, support swapping a selected face throughout a video, and support replacing multiple faces.
- Voyager: the first LLM lifelong learning agent that can continuously explore worlds: Voyager is the first LLM-powered lifelong learning agent in Minecraft. It uses advanced learning techniques to explore, learn skills, and make discoveries without human input. It consists of 3 key components:
- An automatic curriculum for exploration.
- An ever-growing skill library of executable code for storing and retrieving complex behaviors.
- An iterative prompting mechanism that incorporates environment feedback, execution errors, and program improvements.
Voyager interacts with GPT-4 through black-box queries, bypassing the need for fine-tuning, and demonstrates strong lifelong learning abilities in Minecraft: it obtains 3.3× more unique items, travels 2.3× longer distances, and unlocks key tech-tree milestones up to 15.3× faster than prior methods. Everything has been open-sourced.
- LaVIN: cheap and quick vision-language adaptation for LLMs: New research from Xiamen University proposes a novel, cost-effective method for adapting LLMs to vision-language (VL) instruction tuning, called Mixture-of-Modality Adaptation (MMA). MMA uses lightweight adapters, allowing joint optimization of an entire multimodal LLM with a small number of parameters and reducing storage overhead more than a thousandfold compared with existing solutions. It can also switch quickly between text-only and image-text instructions, preserving the NLP capability of the LLM. Based on MMA, the authors built LaVIN, a large vision-language instructed model that enables cheap and quick adaptation to VL tasks without another round of large-scale pre-training. In experiments on ScienceQA, LaVIN performed on par with advanced multimodal LLMs while cutting training time by up to 71.4% and storage costs by 99.9%.
- Top AI scientists and experts sign statement urging safe AI: To facilitate open discussion of the severe risks posed by advanced artificial intelligence, a concise statement has been released urging the global community to prioritize mitigating the risk of AI-induced extinction, treating it on par with other societal-scale risks such as pandemics and nuclear war. The call has garnered support from a growing number of AI scientists and notable figures from various fields, including Sam Altman (CEO, OpenAI), Dario Amodei (CEO, Anthropic), Demis Hassabis (CEO, Google DeepMind), and many more.
- Falcon topples LLaMA as the top open-source LM: Falcon 40B, the UAE’s leading large-scale open-source AI model from the Technology Innovation Institute (TII), is now royalty-free for commercial and research use. It was previously released under a license requiring commercial royalty payments of 10%. The model has been relicensed under Apache 2.0, which grants end users access to any patent covered by the software, and TII has released the model’s weights so researchers and developers can use it to bring their ideas to life. Ranked #1 globally on Hugging Face’s Open LLM leaderboard, Falcon 40B outperforms competitors such as Meta’s LLaMA, Stability AI’s StableLM, and RedPajama from Together.
- OpenAI’s latest idea helps models do math with 78% accuracy: Even today’s state-of-the-art models are prone to hallucinations, which is particularly problematic in domains that require multi-step reasoning. To train more reliable models, OpenAI rewarded each correct step of reasoning (“process supervision”) instead of simply rewarding the correct final answer (“outcome supervision”). Process supervision significantly outperformed outcome supervision for training models on the challenging MATH dataset: the resulting model solved 78% of problems from a representative subset of the MATH test set. Process supervision also has an important alignment benefit: it directly trains the model to produce a chain of thought endorsed by humans.
- Neuralangelo, NVIDIA’s new AI model, turns 2D video into 3D structures: NVIDIA Research has introduced a new AI model for 3D reconstruction called Neuralangelo. It uses neural networks to turn 2D video clips from any device, from cell phone to drone capture, into detailed 3D structures, generating lifelike virtual replicas of buildings, sculptures, and other real-world objects. Neuralangelo’s ability to translate the textures of complex materials, including roof shingles, panes of glass, and smooth marble, from 2D video to 3D assets significantly surpasses prior methods. The high fidelity makes its 3D reconstructions easy for developers and creative professionals to turn into usable virtual objects from footage captured on smartphones.
- Google’s retrieval-augmented model addresses the challenge of pre-training: Large-scale models like T5, GPT-3, PaLM, Flamingo, and PaLI have shown impressive knowledge-storage abilities but require massive amounts of data and compute. Retrieval-augmented models in natural language processing (RETRO, REALM) and computer vision (KAT) aim to overcome these challenges by leveraging retrieval techniques. Google’s model, “REVEAL: Retrieval-Augmented Visual-Language Pre-Training with Multi-Source Multimodal Knowledge Memory,” can provide up-to-date information and improve efficiency by retrieving relevant information instead of relying solely on pre-training. It learns to use a multi-source, multi-modal “memory” to answer knowledge-intensive queries, allowing the model parameters to focus on reasoning about the query rather than memorization.
- And there was more….
– JPMorgan developing a ChatGPT-like service to provide investment advice to customers
– AI to help scientists predict breast cancer spread risk
– IBM consulting launches generative AI center of excellence
– PandaGPT: The all-in-one model for instruction-following
– NVIDIA, MediaTek team up to bring AI-powered infotainment to cars
– American Express will experiment cautiously with generative AI for fintech
– BMW has begun experimenting with AI in designing
– ChatGPT iOS app has been made accessible in 152 countries worldwide.
– Vectara ensures the absence of hallucinations in Generative AI.
– UAE rolls out AI chatbot ‘U-Ask’ in Arabic & English
– Amazon trains AI to weed out damaged goods
– Snapchat launches new generative AI feature, ‘My AI Snaps’
– Instacart launches in-app AI search tool powered by ChatGPT
– SnapFusion enables a text-to-image diffusion model on mobile devices within 2 secs.
– Accenture strengthens AI and ML Engineering through Nextira acquisition
– Alibaba reveals its first LLM just like ChatGPT
Latest AI trends in June 2023: June 03rd 2023
A Sam Altman-backed startup is betting it can crack the code on fusion energy. Here’s how it’s trying to bring that to the masses by 2028.
How AI Protects (and Attacks) Your Inbox
Gmail gets AI: Google adds machine learning models to help with searching emails on phone
How AI could take over elections
Ok I’m 50 min into Lex Fridman + Eliezer and still nobody has said the absolute obvious: AI can do harm in several ways:
- Purely digital, e.g. fakes and hacks. Well, fakes have been around since freakin’ Photoshop, and will be solved soon with ZK (zero-knowledge proofs). If an AI can find a vulnerability, then you can protect against it. I predict phishing will go up by a few percent and other hacks will go down.
- Physical, /with/ the intentions of the creator. Unless we expect a billionaire to be building a secret robot army (I don’t), this is by definition done by a nation state. OK, so what are the applications? Self-driving tanks? Drones? Better military strategy? I do reserve some space for lack of creativity, but I’ve certainly not heard anyone suggest anything one millionth the power of nuclear weapons. Unless maybe you give a software program control over said weapons, which, given the current state of cybersecurity, is a bad idea. Please don’t do that. Honestly, that would first and foremost be a /human/ problem, not an AI destroying the world.
- Physical, with different intentions from its creators, aka the paperclip problem. Sorry, but we’re back at square one: without physical resources, you can just turn it off.
Ok, let’s try to give the most generous scenario of AI destroying the world. You’re Google and you train a new LLM. You apply it to science and it discovers a new drug. What happens then? Does it decide it’s not going to tell its creators and hire a bunch of mercenaries to protect its fragile program instance? Even if this did happen, the government could step in and, well, kill the mercenaries and turn off the machine.
Please help me understand how AGI destroying the world in the next 10 years is anything but nonsense. And why is everybody being so vague on this topic?
Update: So it seems to boil down to new scientific discoveries or being better at politics. Does everybody agree that what’s really important is the “elasticity of performance,” i.e. how performance changes with the money spent creating the model?
Does everyone agree that with elastic performance, an AGI would most likely be beneficial overall (because it would be easily regulated, like e.g. nuclear power)?
And that with inelastic performance, it would pose an existential threat (in the extreme scenario, a random grunt could get access to tech that destroys the world)?
Update 2: The question then becomes, what, if any, are the limits of intelligence?
I, for example, think that intelligence has a limit: there simply have not been enough cases for many important problems.
It could probably get far in math (modulo the incompleteness theorem), and /maybe/ make new physics discoveries with just the current results from experiments (eg hadron collider).
How AI can help bring the world’s dictators and despots to justice
The new head of Human Rights Watch believes AI will turbo charge the fight against global abuses of power. The Telegraph’s Nicola Smith sat down with Tirana Hassan to find out more:
Artificial intelligence has the world worried. The latest warning – this time from a group of industry leaders, including the chief executive of Google DeepMind – says that AI poses an existential risk to humanity and should be considered as much of a threat as nuclear war.
Others have weighed in on the matter. In an academic paper published earlier this month, medics from across the world said that AI could harm the health of millions and called for a halt to the development of the technology until it is better regulated.
Politicians and economists are concerned, too – as are journalists, photographers, artists, train drivers, former Google employees, and everyone in between. But what about those fighting the world’s dictators and despots?
“We talk about technology as a threat – technology is an opportunity for us,” says Tirana Hassan, the newly-appointed head of Human Rights Watch.
Read more: https://www.telegraph.co.uk/global-health/terror-and-security/ai-can-help-bring-dictators-and-despots-to-justice/
As the internet gets saturated with more and more AI content, will there soon come a time when AI models will inevitably get trained on their own previous outputs? After all, this echo-chamber effect seems likely as LLMs and AI graphic design tools are trained on data from the internet, and they’re gaining popularity swiftly.
When this happens, it will probably fill the internet with blogs, images, and videos full of repetitive patterns and overly diplomatic or hallucinated information produced by AI. A possible solution could be rigorous quality checks performed by humans at AI companies. OpenAI already claims to perform such manual checks, but how accurate are they?
Given that a lot of AI content is nearly identical to human writing now and tends to state old information or hallucinated facts confidently (with no records of usage and publishing), manual checks may not be effective. This also makes it tough to determine if and when such an AI loop will occur, and it may be already occurring inconspicuously.
Do you think researchers, human designers, and journalists will come in to save the day by providing the latest information with human writing and designs? Will AI companies employ human specialists for this purpose to ensure user trust? Or will users stop trusting AI tools and general online content; and instead start relying on top research and journalism sites that promise natural and accurate content?
This question is bugging me and I am wondering what your take is…
As far as the current use of AI tools by marketers and designers goes, I suggest they play a positive part in avoiding such a loop by ensuring originality, accuracy, and natural content: doing their own research, adding their own insights, and tailoring AI models to draw only on fresh and reliable sources rather than general online data that might already be AI-generated. That’s what I am aiming to apply in my company’s writing, but what do you all suggest?
Generative AI spend to grow to $1.3 trillion by 2032, but big tech cos will benefit most. Full breakdown inside.
With the amount of hype and impact we’re seeing from generative AI, it’s easy to assume it will explode. But for me it’s the nuance of how that will play out that really matters. This is why a new report piqued my interest around a much deeper dive.
As always, a full breakdown in detail is available here, but main points are summarized below for Reddit discussion.
Why is this important?
- The report estimates generative AI is going to become pervasive in many aspects of our lives, hence the incredible growth in spend Bloomberg has calculated.
- By 2032, generative AI revenue at $1.3T per year will be ~12% of global technology spend. It’s estimated to be at just $67B per year right now.
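Those two numbers imply a striking growth rate. As a back-of-the-envelope sanity check (assuming the report's $67B figure for 2023 and $1.3T for 2032, i.e. a 9-year horizon; this calculation is ours, not the report's):

```python
# Implied compound annual growth rate (CAGR) from the Bloomberg figures
# quoted above: ~$67B of generative AI revenue in 2023 growing to ~$1.3T
# by 2032. The 9-year horizon is our reading of "by 2032".
start, end, years = 67e9, 1.3e12, 2032 - 2023

cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 39% per year
```

A ~39% compounding rate every year for nine years is the kind of trajectory the report is betting on, which puts the "pervasive" claim in concrete terms.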
Incumbents will capture most of the value, not startups, the report says
This is the thesis that’s interesting to me, because several other VCs are saying this as well: Startups may not reap much of the rewards from the growth of generative AI.
- The report estimates that a few select tech companies will reap the greatest rewards: Google, Microsoft, Amazon, and Nvidia in particular.
- AI infrastructure spend will grow to $247B/yr by 2032: this is one major factor benefiting incumbents, who get to lead the innovation here and sell it to customers.
- AI server spend will grow to $134B/yr by 2032: this is the other tailwind benefiting Nvidia, as well as Azure, AWS, and more.
- Digital ad spend powered by generative AI will grow to $192B: this would be a substantial portion of current global digital ad spend (~$500B), and companies like Google and Meta will benefit the most.
- There’s been a lot of discussion about why AI companies are calling for regulation. One reason is that regulation helps incumbents capitalize on the rise in spend by growing market share faster than startups.
AI spend will lead to a reconfiguration of jobs — and that’s already happening today.
This is where I did a bunch of additional research to tie in some other related trends:
- Companies like Dropbox are trimming headcount but adding AI roles: the 16% layoffs at Dropbox in April were to make room for hiring in AI-related roles. Profitable companies are laying off mature departments to invest more in AI.
- 40% of open roles at Wall Street banks like JP Morgan are now AI roles: wow. This is a massive shift and shows the level of investment numerous industries intend to make in AI.
When CEOs like Drew Houston (Dropbox) are proclaiming that “the era of AI computing has finally dawned,” they’re making decisions that shift all the dollars there – from both a tech spend and headcount spend perspective.
P.S. If you like this kind of analysis, I write a free newsletter that tracks the biggest issues and implications of generative AI tech. It’s sent once a week and helps you stay up-to-date in the time it takes to have your Sunday morning coffee.
Two-minutes Daily AI Update: News from NVIDIA, OpenAI, Google, Microsoft, and Alibaba
Continuing with the exercise of sharing an easily digestible and smaller version of the main updates of the day in the world of AI.
- NVIDIA Research has introduced a new AI model for 3D reconstruction called Neuralangelo. It uses neural networks to turn 2D video clips from any device, from cell phone to drone capture, into detailed 3D structures, generating lifelike virtual replicas of buildings, sculptures, and other real-world objects. The high fidelity makes its 3D reconstructions easy for developers and creative professionals to turn into usable virtual objects for their projects.
- OpenAI is launching the Cybersecurity Grant Program, a $1M initiative to boost and quantify AI-powered cybersecurity capabilities and to foster high-level AI and cybersecurity discourse. The goal is to work with defenders across the globe to change the power dynamics of cybersecurity through AI and the coordination of like-minded individuals working for collective safety.
- Google’s retrieval-augmented model addresses the challenge of pre-training; it aims to reduce the computational requirements of large-scale AI models like T5, GPT-3, PaLM, Flamingo, and PaLI. The model uses a multi-source, multi-modal “memory” to answer knowledge-intensive queries, allowing the model parameters to focus on reasoning about the query rather than memorization.
- Microsoft is enhancing the free version of Teams on Windows 11 with new features. The built-in Teams app will now include support for communities, allowing users to organize and interact with family, friends, or small community groups. This feature, similar to Facebook and Discord, was previously limited to mobile devices but is now available on Windows 11. Teams is also getting support for Microsoft Designer, an AI art tool for generating images from text prompts.
- Alibaba joins the crowd of tech companies looking to compete with the mega-popular ChatGPT. It has officially launched its new ChatGPT-style AI chatbot, integrating the technology into its suite of apps, including its flagship messaging app DingTalk. Alibaba plans to continually introduce more features for the chatbot throughout the year, including real-time English-to-Chinese translation of multimedia content and a Google Chrome extension.
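The retrieval-augmented pattern behind Google's model can be illustrated with a toy sketch: rather than memorizing every fact in its parameters, the system looks up the most relevant "memory" entries for a query and reasons over them. The bag-of-words embedding and tiny memory below are illustrative stand-ins of our own, not Google's actual REVEAL implementation, which uses learned multi-modal encoders.

```python
# Toy sketch of retrieval-augmented answering: embed the query, score it
# against every memory entry, and hand the top-k entries to the reasoner.
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    # Bag-of-words "embedding"; a real system uses a learned encoder.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, memory: list[str], k: int = 2) -> list[str]:
    q = embed(query)
    return sorted(memory, key=lambda m: cosine(q, embed(m)), reverse=True)[:k]

memory = [
    "the eiffel tower is in paris",
    "nvidia designs gpus for ai workloads",
    "falcon 40b is an open-source language model",
]
print(retrieve("which company designs ai gpus", memory, k=1))
# → ['nvidia designs gpus for ai workloads']
```

The key design point carried over from the real system is the division of labor: the memory stores facts and can be updated without retraining, while the model's parameters are spent on reasoning over whatever was retrieved.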
AgentGPT: Autonomous AI Agents in your Browser
AgentGPT web is an autonomous AI platform that enables users to easily build and deploy customizable autonomous AI agents directly in the browser. All you have to do is provide a name and objective for your AI agent, then watch as it sets out on an endeavor to achieve the goal you assigned. The agent will autonomously acquire knowledge, take actions, communicate, and adapt to accomplish its assigned aim.
Learn more here: https://www.kdnuggets.com/2023/06/agentgpt-autonomous-ai-agents-browser.html
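The plan-act-adapt loop AgentGPT runs in the browser can be sketched in miniature. The planner and executor below are illustrative placeholders of our own (AgentGPT itself calls an LLM for both steps), but the control flow is the same shape: given a name and objective, the agent keeps planning tasks, executing them, and folding results back into its state until no tasks remain.

```python
# Minimal autonomous-agent loop, with placeholder plan/execute functions.
def plan(objective: str, completed: list[str]) -> list[str]:
    # Placeholder planner: propose one more step until three are done.
    step = len(completed) + 1
    return [f"step {step} toward: {objective}"] if step <= 3 else []

def execute(task: str) -> str:
    # Placeholder executor: a real agent would call tools or an LLM here.
    return f"result of {task}"

def run_agent(name: str, objective: str) -> list[str]:
    completed: list[str] = []
    while tasks := plan(objective, completed):
        for task in tasks:
            completed.append(execute(task))
    return completed

print(run_agent("ResearchGPT", "summarize AI news"))
```

The loop terminates only when the planner returns no further tasks, which is also why real autonomous agents need guardrails such as iteration limits: a planner that always finds more to do never stops.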
Latest AI trends in June 2023: June 02nd 2023
What is the carbon footprint of machine learning for AI?
MIT Researchers Introduce Saliency Cards: An AI Framework to Characterize and Compare Saliency Methods
How to Keep Scaling Large Language Models when Data Runs Out? A New AI Research Trains 400 Models with up to 9B Parameters and 900B Tokens to Create an Extension of Chinchilla Scaling Laws for Repeated Data
One-Minute Daily AI News
Start your day with a quick rundown of the most significant happenings in the world of AI. This article brings together all the crucial AI updates from around the globe, giving you a snapshot of the AI landscape as it stands on June 2, 2023.
- Today OpenAI-rival AI21 Labs released the results of a social experiment, an online game called “Human or Not,” which found that a whopping 32% of people can’t tell the difference between a human and an AI bot.
- Mira Murati, who has worked at OpenAI for more than five years helping to build advanced AI software, lost control of her Twitter account. Her account began promoting a new cryptocurrency called “$OPENAI” that was supposedly “driven by artificial intelligence-based language models.”
- In a simulated test staged by the US military, an air force drone controlled by AI killed its operator to prevent it from interfering with its efforts to achieve its mission.
- President Joe Biden on Thursday amplified fears of scientists who say artificial intelligence could “overtake human thinking” in his most direct warning to date on growing concerns about the rise of AI.
Sources included at: https://bushaicave.com/2023/06/01/6-1-2023/
AI Regulation – Attack on OpenSource
As regulatory bodies tighten their grip on AI, open-source projects are feeling the pressure. This article delves into the ongoing tension between AI regulation and the spirit of open-source innovation.
While the AI hype has been raging through the media over the last six months, governments have been slowly ramping up efforts to regulate the development and application of Artificial Intelligence: Where the World is on AI Regulation — June 2023. An Overview:
https://wolfhf.medium.com/where-the-world-is-on-ai-regulation-june-2023-d0ca0d31ce80
Fastest Local AI Chatbot
AI Chatbots have evolved rapidly in recent years, and this article spotlights the fastest local AI Chatbot as of June 2023. Discover its unique features, speedy response times, and how it’s revolutionizing customer service.
Landscape of Artificial Creativity
Artificial Creativity is an intriguing aspect of AI that blurs the line between machine and man. This article presents an overview of the current landscape of artificial creativity, exploring its potentials, limitations, and impact on various industries. https://twitter.com/josip_vlah1/status/1664191159302868992
OpenAI Launches $1M Cybersecurity Grant Program
1 hour ago, OpenAI announced a $1,000,000 Cybersecurity Grant Program to boost AI strategies in cybersecurity.
The initiative invites proposals globally, funding practical projects that use AI to improve cybersecurity and contribute to public benefit.
The full breakdown will be going live tomorrow morning right here, but all points are included below for Reddit discussion as well.
More Details:
OpenAI has announced the inception of its Cybersecurity Grant Program, a significant $1 million initiative designed to enhance the role of AI in cybersecurity. The program’s key objectives include empowering cybersecurity defenders around the globe, establishing methods to quantify the effectiveness of AI models in cybersecurity, and encouraging rigorous dialogue at the intersection of AI and cybersecurity. The ultimate goal is to transform the conventional dynamics that usually favor attackers in cybersecurity by utilizing AI and coordinating efforts among defenders globally.
The grant program encourages an array of project ideas aimed at boosting various aspects of cybersecurity. These ideas range from collecting and labelling data for training defensive AI and automating incident response, to detecting social engineering tactics and optimizing patch management processes.
Grant Information:
The grants, provided in increments of $10,000, can take the form of API credits, direct funding, or equivalent support. OpenAI has clarified that it will give preference to practical applications of AI in defensive cybersecurity, with an expectation that all projects should aim for maximal public benefit. Projects with offensive security aims will not be considered for this program.
Below are some general project ideas that OpenAI has put forward:
- Collect and label data from cyber defenders to train defensive cybersecurity agents
- Detect and mitigate social engineering tactics
- Automate incident triage
- Identify security issues in source code
- Assist with network or device forensics
- Automatically patch vulnerabilities
- Optimize patch management processes to improve prioritization, scheduling, and deployment of security updates
- Develop or improve confidential computing on GPUs
- Create honeypots and deception technology to misdirect or trap attackers
- Assist reverse engineers in creating signatures and behavior-based detections of malware
- Analyze an organization’s security controls and compare them to compliance regimes
- Help developers create secure-by-design and secure-by-default software
- Help end users adopt security best practices
- Aid security engineers and developers in creating robust threat models
- Produce threat intelligence with salient, relevant information for defenders, tailored to their organization
- Help developers port code to memory-safe languages
P.S. If you like this kind of analysis, there’s more in this free newsletter that tracks the biggest issues and implications of generative AI tech. It helps you stay up-to-date in the time it takes to have your morning coffee.
AI chips are hot. Here’s what they are, what they’re for and why investors see gold
The hottest thing in technology is an unprepossessing sliver of silicon closely related to the chips that power video game graphics. It’s an artificial intelligence chip, designed specifically to make building AI systems such as ChatGPT faster and cheaper.
Such chips have suddenly taken center stage in what some experts consider an AI revolution that could reshape the technology sector — and possibly the world along with it. Shares of Nvidia, the leading designer of AI chips, rocketed up almost 25% last Thursday after the company forecast a huge jump in revenue that analysts said indicated soaring sales of its products. The company was briefly worth more than $1 trillion on Tuesday.
June 2023 Unwrapped: Exploring the Biggest Trends and Headlines of the Month.
https://enoumen.com/2023/06/02/ai-unraveled-podcast-june-2023-latest-ai-trends/
Latest Android AI Trends in May 2023