Azure AI Fundamentals AI-900 Exam Preparation

Azure AI Fundamentals AI-900 Exam Prep PRO


Azure AI Fundamentals AI-900 Exam Preparation: The AI-900 exam is an opportunity to demonstrate knowledge of common ML and AI workloads and how to implement them on Azure. The exam is intended for candidates with both technical and non-technical backgrounds. Data science and software engineering experience are not required; however, some general programming knowledge or experience is beneficial.

Azure AI Fundamentals can be used to prepare for other Azure role-based certifications like Azure Data Scientist Associate or Azure AI Engineer Associate, but it’s not a prerequisite for any of them.

This Azure AI Fundamentals AI-900 Exam Preparation App provides basic and advanced Machine Learning quizzes and practice exams on Azure, Azure Machine Learning job interview questions and answers, and Machine Learning cheat sheets.

Download Azure AI 900 on iOS

Download Azure AI 900 on Windows 10/11

Azure AI Fundamentals Exam Prep

Azure AI Fundamentals AI-900 Exam Preparation App Features:

– Azure AI-900 Questions and Detailed Answers and References

– Machine Learning Basics Questions and Answers

– Machine Learning Advanced Questions and Answers

– NLP and Computer Vision Questions and Answers

– Scorecard

– Countdown timer

– Machine Learning Cheat Sheets

– Machine Learning Interview Questions and Answers

– Machine Learning Latest News

Azure AI Fundamentals AI-900 Exam Preparation
Azure AI 900 – Machine Learning

This Azure AI Fundamentals AI-900 Exam Prep App covers:

  • ML implementation and operations
  • Describe Artificial Intelligence workloads and considerations
  • Describe fundamental principles of machine learning on Azure
  • Describe features of computer vision workloads on Azure
  • Describe features of Natural Language Processing (NLP) workloads on Azure
  • Describe features of conversational AI workloads on Azure
  • QnA Maker service, Language Understanding service (LUIS), Speech service, Translator Text service, Form Recognizer service, Face service, Custom Vision service, Computer Vision service; facial detection, facial recognition, and facial analysis solutions; optical character recognition, object detection, and image classification solutions; Azure Machine Learning designer and automated ML UI; conversational AI, anomaly detection, and forecasting workloads; plus Kafka, SQL, NoSQL, Python, linear regression, logistic regression, sampling, datasets, statistical interaction, selection bias, non-Gaussian distributions, the bias-variance trade-off, the Normal Distribution, correlation and covariance, point estimates and confidence intervals, A/B testing, p-values, the statistical power of sensitivity, over-fitting and under-fitting, regularization, the Law of Large Numbers, confounding variables, survivorship bias, univariate, bivariate and multivariate analysis, resampling, ROC curves, TF/IDF vectorization, cluster sampling, etc.
  • This App can help you:
  • – Identify features of common AI workloads
  • – Identify prediction/forecasting workloads
  • – Identify features of anomaly detection workloads
  • – Identify computer vision workloads
  • – Identify natural language processing or knowledge mining workloads
  • – Identify conversational AI workloads
  • – Identify guiding principles for responsible AI
  • – Describe considerations for fairness in an AI solution
  • – Describe considerations for reliability and safety in an AI solution
  • – Describe considerations for privacy and security in an AI solution
  • – Describe considerations for inclusiveness in an AI solution
  • – Describe considerations for transparency in an AI solution
  • – Describe considerations for accountability in an AI solution
  • – Identify common types of computer vision solutions
  • – Identify Azure tools and services for computer vision tasks
  • – Identify features and uses for key phrase extraction
  • – Identify features and uses for entity recognition
  • – Identify features and uses for sentiment analysis
  • – Identify features and uses for language modeling
  • – Identify features and uses for speech recognition and synthesis
  • – Identify features and uses for translation
  • – Identify capabilities of the Text Analytics service
  • – Identify capabilities of the Language Understanding service (LUIS)
  • – etc.
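Several of the topics listed above, such as TF/IDF vectorization, can be made concrete in a few lines of code. Below is a minimal pure-Python sketch (the function and sample documents are our own illustration; real projects would typically use a library such as scikit-learn):

```python
import math
from collections import Counter

def tf_idf(docs):
    """Compute TF-IDF weights for a list of tokenized documents.

    TF is the raw term count normalized by document length; IDF is
    log(N / df), where df is the number of documents containing the term.
    """
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))  # count each term once per document
    weights = []
    for doc in docs:
        counts = Counter(doc)
        total = len(doc)
        weights.append({
            term: (count / total) * math.log(n / df[term])
            for term, count in counts.items()
        })
    return weights

docs = [["azure", "ai", "exam"], ["azure", "ml", "exam"], ["nlp", "ml"]]
weights = tf_idf(docs)
# A term appearing in fewer documents ("ai") gets a higher weight than a
# more common one ("azure") within the same document.
```

Because IDF is log(N/df), a term that appears in every document gets a weight of log(1) = 0, which is why very common words are down-weighted.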

Download Azure AI 900 on iOS

Download Azure AI 900 on Windows 10/11

Azure AI Fundamentals Breaking News – Azure AI Fundamentals Certifications Testimonials

  • Transfer domain name from Azure DNS to CloudFlare.
    by /u/hamza0419 (Microsoft Azure) on April 24, 2024 at 2:47 pm

    I have a domain name on Azure DNS, and I want to change the primary name servers for my domain to be managed by Cloudflare? Is it possible or not??? submitted by /u/hamza0419 [link] [comments]

  • Azure Function Core Tools doesn't like Python 3.11.9
    by /u/jbudemy (Microsoft Azure) on April 24, 2024 at 2:42 pm

    I'm on Windows 10 Pro. I installed Azure Function Core Tools 4.0.5571 about a month ago for another tutorial I was doing. Function Core Tools should support Python 3.6 through 3.11. I'm doing a new tutorial on Udemy, but I cannot send a question to them and I don't get an error. I use Google Chrome v124.0.6367 to access Udemy. I have a question for that problem in r/Udemy. I installed Python 3.11.9 because my Azure Function tools only support that, not Python 3.12, which I also have. I made my new project in PyCharm and based it off Python 3.11.9. Inside my project I can do "func init" and "func new" just fine. When I do "func start" I get this error: ``` C:\Users\USER\AzureProjects\AzFuncDataDriven\ch08Hello-b>func start Could not find a Python version. Python 3.6.x, 3.7.x, 3.8.x, 3.9.x, 3.10.x or 3.11.x is recommended, and used in Azure Functions. C:\Users\USER\AzureProjects\AzFuncDataDriven\ch08Hello-b>python -V Python 3.11.9 ``` Anyone know how to fix this? I already have paths to Python 3.11.9 set up; nothing is intercepting Python path-wise, as indicated by my test "python -V". So it seems Azure Func Tools does not recognize Python 3.11.9. EDIT: Doesn't Reddit support the three-backtick code fence in Markdown? submitted by /u/jbudemy [link] [comments]

  • Need suggestions on selecting Azure services architecture for a Web application restricted to a Corporate network only
    by /u/lakshaydulani (Microsoft Azure) on April 24, 2024 at 2:39 pm

    I have a simple web application hosted on App service, which utilizes other services on Azure. I need to restrict the application to a Corporate network. The company I am working with has put all of the services inside a Virtual network. And used a Front door to do the domain binding. Now the application is supposed to be restricted to a Corporate network. The company has asked us to remove Front door. Shall I be making use of Application gateway in this case? submitted by /u/lakshaydulani [link] [comments]

  • Wondering if someone could assist me with Azure Storage Explorer? (Question in post)
    by /u/SammMoney (Microsoft Azure) on April 24, 2024 at 2:11 pm

    My team uses it to upload large amounts of data hundreds of Gigs at a time to Blob storage. Lately, it just stops uploading or downloading and just hangs. No error, it acts like it is still trying but nothing happens. We've used it for about a year and this has just started happening. I don't know much about the blob storage itself so if there is anything I can provide you to help you help me please let me know. I do not have Admin rights on my machines or the Blob so I have to work through IT which can take a bit of time to respond. So far, we've uninstalled/reinstalled ASE, deleted old logs, cleared some of the cache folders. Updated to the latest version. Is there a size limit on the blob containers themselves, not the size of the files in the blob? Maybe we're topping out there? submitted by /u/SammMoney [link] [comments]

  • Serverless spark compute in Notebook not starting in azure machine learning workspace.
    by /u/Altruistic_Fun648 (Microsoft Azure) on April 24, 2024 at 2:07 pm

    Hi, I have been trying to run serverless Spark compute in a notebook in an Azure Machine Learning workspace. Previously, when I ran the job it started with the default configuration with no issues. Now I am not able to run the jobs because the Spark compute is not starting. I have checked for any configuration changes and access permissions to storage accounts and other dependency resources. Has anybody faced this issue and been able to find a solution for it? Creating a new workspace and launching the notebook there is not a solution for me. submitted by /u/Altruistic_Fun648 [link] [comments]

  • New managed disk property to let you know last time its ownership was changed.
    by /u/JohnSavill (Microsoft Azure) on April 24, 2024 at 1:50 pm

    Quick video looking at a new property on managed disks that tells you the last time their ownership was changed, e.g. attached/detached from a VM to help you more confidently know a disk is not being used and may be a candidate for cleanup to optimize your environment and spend. https://youtu.be/e8lMXxHgv-E 00:00 - Introduction 00:16 - Managed disk states 02:07 - Disk state changes 02:27 - LastOwnershipUpdateTime 03:55 - Steps and governance 05:03 - Viewing the state and update time 07:07 - Summary submitted by /u/JohnSavill [link] [comments]

  • New Austria region, any news?
    by /u/nico282 (Microsoft Azure) on April 24, 2024 at 1:29 pm

    I found the announcement for a new Azure region in Austria, but no updates since 2020. Does anyone have some recent news? Is it planned to open in 2024? submitted by /u/nico282 [link] [comments]

  • Matlab Runtime Compiler
    by /u/Arvinth_4 (Microsoft Azure) on April 24, 2024 at 1:02 pm

    I am wondering if it is possible to run the compiler in a CI pipeline with an image, because I have some packages that were compiled by the MATLAB Runtime Compiler and I want to run some tests with Pytest in my pipeline. submitted by /u/Arvinth_4 [link] [comments]

  • How do I create a proper PowerShell Function App?
    by /u/Dultus (Microsoft Azure) on April 24, 2024 at 12:37 pm

    Hello, I've created an Azure Function and added the dependencies in the files. However, trying to test my HttpTrigger leads me to believe that simple stuff like ConvertFrom-Json doesn't work, and I don't know why. I input a settings.json as the body. I output the data and checked whether I did the HTTP POST call correctly, and it does respond with the input JSON if I put it in the body. Doing something simple like $test = ConvertFrom-Json $request.body doesn't work either; at least I don't get any output when I put it into the body to return. I tried running my script using the App Service editor, but that didn't work because the PowerShell version there is 5.x instead of the required 7.2. Even creating a file with Out-File and reading it via Get-Content didn't work. Any common mistakes or oversights? submitted by /u/Dultus [link] [comments]

  • EA to CSP
    by /u/Total-Law4620 (Microsoft Azure) on April 24, 2024 at 11:55 am

    I work for a large CSP, I'm noticing a substantial number of requests from clients to quote on migrating them off their existing EA to our CSP. Naturally we can't give them the discounts they currently receive, and in some cases I have clients asking for an intermediary migration. Move em now, take them back to EA in 6 months. Anyone else getting this lately? Also I'm not familiar with the process but it seems like an awful lot of work for 6 months. Anyone familiar with the effort involved? submitted by /u/Total-Law4620 [link] [comments]

  • #azure getting this error? Any solution?!
    by /u/idakhere (Microsoft Azure) on April 24, 2024 at 11:38 am

    submitted by /u/idakhere [link] [comments]

  • Azure Bastion Host Configuration
    by /u/lfsdriver (Microsoft Azure) on April 24, 2024 at 11:04 am

    Hi all, New to all things Azure, and currently looking at configuring an Azure Bastion host within our tenant for secure access to our VMs. However, we have multiple subscriptions. Will I need to configure a Bastion host for each subscription? Apologies if this is a super noob question, but just trying to understand more about Azure! Cheers! submitted by /u/lfsdriver [link] [comments]

  • Data warehouse - single vs multiple?
    by /u/scan-horizon (Microsoft Azure) on April 24, 2024 at 10:48 am

    Single vs many Data Warehouses… I work at a company with 100s staff spread across 5 core departments. We have a presence in Azure and MS Entra. We are currently in the midst of a discussion around using a single DW for the entire company, or multiple DWs with each serving the specific needs of each department and teams within. The DWs would be used for staff to query data in their business area (so it’s an OLAP dw). The concept of a DW is not defined internally (a single SQL db could be classed as a DW technically…). We currently have a complex DW (Azure data factory + Azure Databricks) which processes and serves travel and transport data to internal users, external users from other businesses, and also the data is available to the general public. A different department wants a DW solution to store their data which is highly sensitive / personal. None of this data relates to the data in the transport DW. The solution would be an Azure data factory moving data between SQL databases, and interfaces with Power Apps/BI apps. Only select users with security clearances can ever see the data stored here. This DW was built by a big team of external developers and although hand over documentation exists, I don’t have a sense of ownership over it. Other departments now want DWs to process and store their data (for example, a place to keep all our workstreams, projects, and client info across the company). So lots of different use cases, but all requiring some kind of DW to take data from various sources, transform it, and load it into databases/sink ready for consumption by internal and external users. I’m suggesting each business unit should have their own DW solution to support the nuance of their area, whereas my bosses are suggesting a single solution so we don’t have loads of different DWs all over the place. Any thoughts on the best approach to take from a data engineering / cloud architecture stance? 
We already have a prod, test, dev Subscription/Vnet setup in Azure, so could deploy a single ADF instance in each for the entire company, which would ETL to isolated databases per business unit. Alternatively should each business area have their own x3 ADF (prod test dev). Keep in mind we’d want a solution with minimal billing/administrative overhead as our internal tech team is tiny, often supported by external (expensive) partners. Thanks submitted by /u/scan-horizon [link] [comments]

  • Find Power BI data storage
    by /u/Alternative_Owl7561 (Microsoft Azure) on April 24, 2024 at 10:41 am

    I am trying to figure out where our Power BI data is stored. It should be in the standard location, which is Azure Blob storage, but how do I access that? I checked Azure Storage Accounts, but there are no accounts. Also, when I go to Monitor > Storage accounts, it does not show anything. What am I missing? submitted by /u/Alternative_Owl7561 [link] [comments]

  • Move from Active Directory to Cloud-only renamed all local administrator accounts
    by /u/pan_cage (Microsoft Azure) on April 24, 2024 at 10:01 am

    So today we disabled the sync to our on-prem Active Directory by disabling all sync services. Entra automatically converts all users and groups to cloud objects. All devices are Entra-joined, no hybrid joins. All worked well so far, but soon we found out that all local administrator accounts under "System settings > Accounts > Other users" were renamed. Before the switch, they were COMPANY\firstname.lastname; now we have AzureAD\FirstnameLastname. Many employees' scripts and development environments don't work now. Anyone know an easy fix for this? submitted by /u/pan_cage [link] [comments]

  • FILIPINO TAKERS, LIST OF VALID IDS
    by /u/Long_Live_Japan (Microsoft Azure Certifications) on April 24, 2024 at 9:13 am

    Is philhealth a valid ID for MS Certification? Thank you. submitted by /u/Long_Live_Japan [link] [comments]

  • I must migrate from EA to CSP. As a non-tech person, how screwed am I?
    by /u/lllGreyfoxlll (Microsoft Azure) on April 24, 2024 at 9:12 am

    I do hope I'm in the right place; apologies if not. I'm dealing with an MSP that needs to help us move away from EA to their CSP. I have access to an account (as in billing account > department > account) but, when trying to accept the transfer link, cannot see the correct enrollment. The subs don't show up. From what I understand, that means we have a permission issue. But after hours of reading through documentation I can't figure out if the problem is that I'm not an EA Admin (how do I even check that?) or the billing account owner. For the record, I'm GA on the tenant and can see the account scope but not the billing account. Anyone been through this already? submitted by /u/lllGreyfoxlll [link] [comments]

  • AZ-104 or MD-102?
    by /u/agentmulder69 (Microsoft Azure Certifications) on April 24, 2024 at 8:32 am

    Hi, over the past few months I've passed the AZ, MS and SC 900 exams and I'm looking to do something a bit more heavy now. AZ-104 was going to be my next; however, the company I work for barely touches Azure, though it does use O365 and is about to kick off a migration to Intune, which makes me think the MD-102 might be more applicable for me? I've basically just been looking into which exams cover Intune, and the MD-102, although not Intune-exclusive, seems to be the best and only match. And from what I understand it's relatively new, replacing two older exams, 100 and 101? I also can't work out if MD-102 is part of an exam path; its name is Endpoint Administrator Associate. Is there another that follows, for say expert? submitted by /u/agentmulder69 [link] [comments]

  • Secure APIs (APIM) using client certificate authentication (mTLS) exposed through Application Gateway
    by /u/grator57 (Microsoft Azure) on April 24, 2024 at 8:28 am

    Hello all, I am struggling to set up mTLS on APIM when it's exposed through an App Gateway. I have read some MS docs: https://learn.microsoft.com/en-us/azure/application-gateway/mutual-authentication-overview?tabs=powershell https://learn.microsoft.com/en-us/azure/api-management/api-management-howto-mutual-certificates-for-clients If I understood correctly: if the APIM is exposed through the App Gtw, it is impossible to use the "classic" policy of APIM (since the client cert object does not exist after the App Gtw)? Policies like this one (doc here):

    <validate-client-certificate validate-revocation="true | false" validate-trust="true | false" validate-not-before="true | false" validate-not-after="true | false" ignore-error="true | false">
        <identities>
            <identity thumbprint="certificate thumbprint" serial-number="certificate serial number" common-name="certificate common name" subject="certificate subject string" dns-name="certificate DNS name" issuer-subject="certificate issuer" issuer-thumbprint="certificate issuer thumbprint" issuer-certificate-id="certificate identifier"/>
        </identities>
    </validate-client-certificate>

    Instead I need to use context variables to validate the certificate, is this right? Is it compulsory to set up mTLS between the client and the App Gateway? I mean, if a client creates a request with a certificate, am I able to rewrite the App Gtw request with this certificate without any verification in the App Gtw itself? Thanks for your help! grator57 submitted by /u/grator57 [link] [comments]

  • Trying to have multiple repos/actions on container app deploy to independant containers
    by /u/WraytheZ (Microsoft Azure) on April 24, 2024 at 7:19 am

    Hi All, Scratching our heads on this one. We've got a few container repos on GitHub that we're trying to set up CD pipelines for, to deploy to separate containers on an Azure Container Apps instance. However, when we deploy, it overwrites the starter hello-world template and doesn't set up a separate one. Everything we've found thus far on the internet involves Bicep and a shared directory structure, which is not what we're after; we need to isolate microservices from each other in the builds/deployments. Has anyone managed to make this work with GH Actions?

    steps:
      - name: Checkout
        uses: actions/checkout@v4
      - name: Log in to Azure
        uses: azure/login@v1
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}
      # Likely do not need the below step if the above az login works.
      - name: Docker Login
        uses: azure/docker-login@v1
        with:
          login-server: ${{ secrets.AZURE_ACR_URL }}
          username: ${{ secrets.AZURE_ACR_USERNAME }}
          password: ${{ secrets.AZURE_ACR_PASSWORD }}
      - name: Build and Push to ACR
        uses: docker/build-push-action@v2
        with:
          push: true
          tags: XXXXXXXX.azurecr.io/${{ secrets.IMAGE_NAME }}:latest #:${{ github.sha }}
          file: Dockerfile
      - name: Log in to ACR
        run: |
          az acr login --name XXXXXXXX
      - name: Deploy to Azure Container Apps
        uses: azure/container-apps-deploy-action@v1
        with:
          imageToDeploy: XXXXXXXX/${{ secrets.IMAGE_NAME }}:${{ github.sha }}
          # imageToBuild: XXXXXXXX/${{ secrets.IMAGE_NAME }}:latest
          containerAppName: ca-XXXXXXXX-01
          resourceGroup: rg-XXXXXXXX
          acrName: XXXXXXXX
          # acrUsername: ${{ secrets.AZURE_ACR_USERNAME }}
          # acrPassword: ${{ secrets.AZURE_ACR_PASSWORD }}

    submitted by /u/WraytheZ [link] [comments]

Download Azure AI 900 on iOS

Download Azure AI 900 on Windows 10/11

AWS Machine Learning Certification Specialty Exam Prep

AWS Machine Learning Specialty Certification Prep (Android)


The AWS Certified Machine Learning Specialty validates expertise in building, training, tuning, and deploying machine learning (ML) models on AWS.

Use this App to learn about Machine Learning on AWS and prepare for the AWS Machine Learning Specialty Certification MLS-C01.

Download AWS Machine Learning Specialty Exam Prep App on iOS

Download AWS Machine Learning Specialty Exam Prep App on Android/Web/Amazon

AWS MLS-C01 Machine Learning Specialty Exam Prep PRO


AWS machine learning certification prep

Download AWS Machine Learning Specialty Exam Prep App on iOS

Download AWS Machine Learning Specialty Exam Prep App on Android/Web/Amazon

The App provides hundreds of quizzes and practice exams about:

– Machine Learning Operations on AWS

– Modeling

– Data Engineering

– Computer Vision

– Exploratory Data Analysis

– ML implementation & Operations

– Machine Learning Basics Questions and Answers

– Machine Learning Advanced Questions and Answers

– Scorecard

– Countdown timer

– Machine Learning Cheat Sheets

– Machine Learning Interview Questions and Answers

– Machine Learning Latest News

The App covers basic and advanced Machine Learning topics including: NLP, Computer Vision, Python, linear regression, logistic regression, sampling, datasets, statistical interaction, selection bias, non-Gaussian distributions, the bias-variance trade-off, the Normal Distribution, correlation and covariance, point estimates and confidence intervals, A/B testing, p-values, the statistical power of sensitivity, over-fitting and under-fitting, regularization, the Law of Large Numbers, confounding variables, survivorship bias, univariate, bivariate and multivariate analysis, resampling, ROC curves, TF/IDF vectorization, cluster sampling, etc.
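One of the topics above, linear regression, has a closed-form solution worth knowing for the exam. A pure-Python sketch of simple ordinary least squares (the sample data and function name are our own illustration; on AWS this would normally be handled by SageMaker or a framework such as scikit-learn):

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = slope * x + intercept.

    slope = cov(x, y) / var(x); intercept = mean(y) - slope * mean(x).
    """
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    slope = cov_xy / var_x
    intercept = mean_y - slope * mean_x
    return slope, intercept

slope, intercept = fit_line([1, 2, 3, 4], [2, 4, 6, 8])
# Perfectly linear data recovers slope 2.0 and intercept 0.0.
```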

Domain 1: Data Engineering

Create data repositories for machine learning.

Identify data sources (e.g., content and location, primary sources such as user data)

Determine storage mediums (e.g., DB, Data Lake, S3, EFS, EBS)

Identify and implement a data ingestion solution.

Data job styles/types (batch load, streaming)

Data ingestion pipelines (Batch-based ML workloads and streaming-based ML workloads), etc.
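The batch versus streaming distinction above can be sketched with plain Python generators (an illustration only; in a real AWS pipeline these roles are typically played by services such as AWS Glue for batch loads and Amazon Kinesis for streams):

```python
def batch_load(records, batch_size=3):
    """Batch-style ingestion: accumulate records and emit fixed-size chunks."""
    batch = []
    for record in records:
        batch.append(record)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:  # flush the final, possibly partial, batch
        yield batch

def stream_load(records):
    """Streaming-style ingestion: emit each record as soon as it arrives."""
    for record in records:
        yield record

batches = list(batch_load(range(7), batch_size=3))
# → [[0, 1, 2], [3, 4, 5], [6]]
```

The design trade-off mirrors the exam topic: batch loads amortize overhead per chunk, while streaming minimizes the latency between a record arriving and being processed.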

Domain 2: Exploratory Data Analysis

Sanitize and prepare data for modeling.

Perform feature engineering.

Analyze and visualize data for machine learning.
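As a small example of the data-preparation step above, here is z-score standardization, a common feature-engineering transform (the helper name and sample values are our own; pandas or scikit-learn would be the usual tools):

```python
import statistics

def standardize(values):
    """Z-score standardization: zero mean, unit sample standard deviation."""
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    return [(v - mean) / stdev for v in values]

scaled = standardize([10.0, 20.0, 30.0])
# The result is centered on zero with unit spread: [-1.0, 0.0, 1.0]
```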

Domain 3: Modeling

Frame business problems as machine learning problems.

Select the appropriate model(s) for a given machine learning problem.

Train machine learning models.

Perform hyperparameter optimization.

Evaluate machine learning models.
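Hyperparameter optimization in its simplest form is a grid search over candidate values. A toy sketch (the validation_error function here is hypothetical and stands in for retraining and scoring a real model; SageMaker offers managed hyperparameter tuning jobs for real workloads):

```python
def validation_error(k):
    # Hypothetical validation-error curve; in practice this would retrain
    # and score a model at the given hyperparameter setting.
    return (k - 3) ** 2 + 1

def grid_search(candidates, score_fn):
    """Exhaustive search: evaluate every candidate, keep the lowest error."""
    return min(candidates, key=score_fn)

best_k = grid_search([1, 2, 3, 4, 5], validation_error)
# → 3, the candidate with the lowest validation error
```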

Domain 4: Machine Learning Implementation and Operations

Build machine learning solutions for performance, availability, scalability, resiliency, and fault tolerance.

Recommend and implement the appropriate machine learning services and features for a given problem.

Apply basic AWS security practices to machine learning solutions.

Deploy and operationalize machine learning solutions.

Machine Learning Services covered:

Amazon Comprehend

AWS Deep Learning AMIs (DLAMI)

AWS DeepLens

Amazon Forecast

Amazon Fraud Detector

Amazon Lex

Amazon Polly

Amazon Rekognition

Amazon SageMaker

Amazon Textract

Amazon Transcribe

Amazon Translate

Other Services and topics covered are:

Ingestion/Collection

Processing/ETL

Data analysis/visualization

Model training

Model deployment/inference

Operational

AWS ML application services

Language relevant to ML (for example, Python, Java, Scala, R, SQL)

Notebooks and integrated development environments (IDEs),

S3, SageMaker, Kinesis, Lake Formation, Amazon Athena, Kibana, Redshift, Textract, EMR, Glue; data formats such as CSV, JSON, IMG, and Parquet; or databases

Amazon EC2, Amazon Elastic Container Registry (Amazon ECR), Amazon Elastic Container Service, Amazon Elastic Kubernetes Service, Amazon Redshift

Important: To succeed on the real exam, do not memorize the answers in this app. It is very important that you understand why a question is right or wrong, and the concepts behind it, by carefully reading the reference documents in the answers.

Note and disclaimer: We are not affiliated with Microsoft, Azure, Google, or Amazon. The questions are put together based on the certification study guide and materials available online. The questions in this app should help you pass the exam, but passing is not guaranteed. We are not responsible for any exam you do not pass.

Download AWS Machine Learning Specialty Exam Prep App on iOS

Download AWS Machine Learning Specialty Exam Prep App on Android/Web/Amazon

  • [R] Recommendation System Approach
    by /u/sigh_k (Machine Learning) on April 24, 2024 at 2:34 pm

    Hello everyone, I am currently developing a recommendation system aimed at suggesting previously logged foods to users. The goal is to make meal logging simpler and more intuitive by leveraging past data. Here are some constraints and specifications of the system. Constraints: the system will only recommend foods that the user has previously logged; it needs to handle food logging both at the end of the day and throughout the day; the initial dataset will start at 0 and the model will grow with each user. Parameters: time of day when foods are logged. I am looking for insights on which models might be best suited for this task. If you could provide insight, that would be great. If you are curious what startup: https://wefit.ai. Thanks! submitted by /u/sigh_k [link] [comments]

  • [R] I made an app to predict ICML paper acceptance from reviews
    by /u/Lavishness-Mission (Machine Learning) on April 24, 2024 at 12:23 pm

    https://www.norange.io/projects/paper_scorer/ A couple of years ago, u/programmerChilli analyzed ICLR 2019 reviews data and trained a model that rather accurately predicted acceptance results for NeurIPS. I've decided to continue this analysis and trained a model (total ~6000 parameters) on newer NeurIPS reviews, which has twice as many reviews compared to ICLR 2019. Additionally, review scores system for NeurIPS has changed since 2019, and here is what I've learned: 1) Both conferences consistently reject nearly all submissions scoring <5 and accept those scoring >6. The most common score among accepted papers is 6. An average rating around 5.3 typically results in decisions that could go either way for both ICML and NeurIPS, suggesting that ~5.3 might be considered a soft threshold for acceptance. 2) Confidence scores are less impactful for borderline ratings such as 4 (borderline reject), 5 (borderline accept), and 6 (weak accept), but they can significantly affect the outcome for stronger reject or accept cases. For instance, with ratings of [3, 5, 6] and confidences of [*, 4, 4], changing the "Reject" confidence from 5 to 1 shifts the probabilities from 26.2% - 31.3% - 52.4% - 54.5% - 60.4%, indicating that lower confidence in this case increases your chances. Conversely, for ratings [3, 5, 7] with confidences [4, 4, 4], the acceptance probability is 31.3%, but it drops to 28.1% when the confidence changes to [4, 4, 5]. Although it might seem counterintuitive, a confidence score of 5 actually decreases your chances. One possible explanation is that many low-quality reviews rated 5 are often discounted by the Area Chairs (ACs). Hope this will be useful, and thanks to u/programmerChilli for the inspiration! I also discussed this topic in a series of tweets. submitted by /u/Lavishness-Mission [link] [comments]

  • [R] SpaceByte: Towards Deleting Tokenization from Large Language Modeling - Rice University 2024 - Practically the same performance as subword tokenizers without their many downsides!
    by /u/Singularian2501 (Machine Learning) on April 24, 2024 at 11:42 am

    Paper: https://arxiv.org/abs/2404.14408 Github: https://github.com/kjslag/spacebyte Abstract: Tokenization is widely used in large language models because it significantly improves performance. However, tokenization imposes several disadvantages, such as performance biases, increased adversarial vulnerability, decreased character-level modeling performance, and increased modeling complexity. To address these disadvantages without sacrificing performance, we propose SpaceByte, a novel byte-level decoder architecture that closes the performance gap between byte-level and subword autoregressive language modeling. SpaceByte consists of a byte-level Transformer model, but with extra larger transformer blocks inserted in the middle of the layers. We find that performance is significantly improved by applying these larger blocks only after certain bytes, such as space characters, which typically denote word boundaries. Our experiments show that for a fixed training and inference compute budget, SpaceByte outperforms other byte-level architectures and roughly matches the performance of tokenized Transformer architectures. submitted by /u/Singularian2501 [link] [comments]

  • [D] How can ChatGPT generate such a response?
    by /u/No-Establishment381 (Machine Learning) on April 24, 2024 at 11:03 am

    I just typed 'a' into the prompt, and it generated a reasonable answer even though I didn't provide any context (or prompt) except the single letter 'a'. To my knowledge, an LLM is trained in the following ways: given a prompt, predict the next token (iteratively); or one token is masked, and the LLM should guess it. How is it possible to generate such a reasonable answer if all the LLM can do is guess the following token correctly? [Sorry if the post does not make sense. English is not my first language] submitted by /u/No-Establishment381 [link] [comments]
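
    Even a one-character prompt is a full conditioning context for an autoregressive model (plus the hidden chat template the service wraps around it), so "guessing the following token" repeatedly is enough to produce whole responses. A toy greedy decoding loop over a hypothetical character-level transition table (standing in for an LLM's learned distribution) shows the mechanics:

```python
# Hypothetical next-character table; a real LLM predicts a probability
# distribution over its whole vocabulary instead.
NEXT = {"a": "n", "n": "s", "s": "w", "w": "e", "e": "r", "r": None}

def generate(prompt: str) -> str:
    """Greedy autoregressive decoding: repeatedly predict the next token
    given everything generated so far, until an end-of-sequence marker."""
    out = prompt
    while True:
        nxt = NEXT.get(out[-1])
        if nxt is None:
            break
        out += nxt
    return out
```

    Starting from just "a", the loop unrolls to "answer": each step conditions on the growing sequence, which is exactly how a single-letter prompt still yields a coherent reply.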

  • [D] Keeping track of models and their associated metadata.
    by /u/ClearlyCylindrical (Machine Learning) on April 24, 2024 at 10:20 am

    I am starting to accumulate a large number of models for a project I am working on. Many of these models are old and kept for archival sake, and many are fine-tuned from other models. I am wondering if there is an industry-standard way of dealing with this. In particular, I am looking for the following:
    - Information about parameters used to train the model
    - Datasets used to train the model
    - Other metadata about the model (e.g. what objects an object detection model was trained for)
    - Model performance
    - Model lineage (what model was it fine-tuned from)
    - Model progression (is this model a direct upgrade from some other model, such as being fine-tuned from the same model but with better hyperparameters)
    - Model source (not sure about this, but I'm thinking of some way of linking the model to the Python script which was used to train it; not crucial, but something like this would be nice)
    Are there any tools or services which could help me achieve some of this functionality? Also, if this is not the sub for this question, could I get some pointers in the correct direction? Thanks! submitted by /u/ClearlyCylindrical [link] [comments]
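
    Established tools cover most of this (MLflow's model registry, Weights & Biases artifacts, DVC, Hugging Face model cards), but the requested fields are easy to prototype in-house. A minimal sketch, with every field and function name invented for illustration rather than taken from any of those tools:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ModelCard:
    name: str
    hyperparams: dict = field(default_factory=dict)   # training parameters
    datasets: list = field(default_factory=list)      # training data used
    metrics: dict = field(default_factory=dict)       # model performance
    parent: Optional[str] = None                      # lineage: fine-tuned from
    train_script: Optional[str] = None                # link to training code
    notes: str = ""                                   # free-form metadata

REGISTRY = {}

def register(card: ModelCard) -> None:
    REGISTRY[card.name] = card

def lineage(name: str) -> list:
    """Walk the fine-tuning chain back to the base model."""
    chain = []
    while name is not None:
        chain.append(name)
        name = REGISTRY[name].parent
    return chain
```

    A registry like this (serialized to JSON alongside the weights) answers the lineage and progression questions directly; the heavier tools add the parts that are hard to hand-roll, such as artifact storage and experiment comparison UIs.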

  • [D] Deploy the fine-tuned Mistral 7B model using the Hugging Face library
    by /u/Future-Outcome3167 (Machine Learning) on April 24, 2024 at 9:31 am

    I followed the tutorial provided at https://www.datacamp.com/tutorial/mistral-7b-tutorial and now seek methods to deploy the model for faster inference using Hugging Face and Gradio. Could anyone please share a guide notebook or article for reference? Any help would be appreciated. submitted by /u/Future-Outcome3167 [link] [comments]

  • [D] Transkribus vs Tesseract for Handwritten Text Recognition (HTR)
    by /u/Pretty_Instance4483 (Machine Learning) on April 24, 2024 at 6:15 am

    I am looking for an HTR tool with the best accuracy, and preferably not pricey (obviously). From my research, Transkribus seems to be the most-mentioned platform with good reviews. As I would need to convert images to text regularly, I would need to pay for the subscription. So I am wondering if I could use the Tesseract and/or TensorFlow Python libraries to achieve the same result for free. Would using Tesseract/TensorFlow be less accurate than using Transkribus? I have learned only the basics of machine learning (TensorFlow, scikit-learn, Keras), so I might not have enough knowledge to see the difference between the two solutions. Or would training Tesseract/TensorFlow be challenging? submitted by /u/Pretty_Instance4483 [link] [comments]

  • [D] How do researchers think about inductive bias when creating new or improving foundational models?
    by /u/binny_sarita (Machine Learning) on April 24, 2024 at 2:36 am

    I am an undergraduate student learning machine learning. What I have gathered from reading a few papers is that we try to reduce the search space by imposing an inductive bias on machine learning models, and that success in creating useful models comes when the inductive bias matches the underlying data. In hierarchical models like NVAE, how did they instill inductive bias through the way the data gets computed? (I think it's called algorithmic bias, though I'm not sure.) More generally, how do people decide that a given inductive bias will be helpful, and what step-by-step procedure do they go through to build it in? I took a lot of classes in machine learning and statistics but never got any lectures explaining this. Did I miss a course/lecture? Please provide me with related papers/lectures/talks if possible. Thank you. submitted by /u/binny_sarita [link] [comments]

  • [R] Generalized Contrastive Learning for Multi-Modal Retrieval and Ranking
    by /u/Jesse_marqo (Machine Learning) on April 23, 2024 at 11:07 pm

    Generalization of the popular CLIP training method, better suited for search and recommendations.
    Paper: https://arxiv.org/pdf/2404.08535.pdf
    Github: https://github.com/marqo-ai/GCL
    Highlights:
    - Generalises CLIP: use any number of text and/or images to represent documents.
    - Better text understanding by having both inter- and intra-modal losses.
    - Can encode rank/importance/relevance, a.k.a. “rank-tune”.
    - Works with pretrained text and CLIP models.
    - Can learn uni- or multi-vector representations for documents.
    - Works with binary and Matryoshka methods.
    - Open-source 10M-row multi-modal dataset with 100k queries and ~5M products.
    Why? The prevailing methods for training embedding models are largely disconnected from the end use-case (like search), the vector database, and the requirements of users, and there is a lack of representative datasets for development and evaluation, particularly when multiple modalities and ranking are involved. Although vector search is very powerful and enables searching across just about any data, the current methods have limitations, which means much of the potential of vector search goes unmet. Some of the current challenges are described below.
    Restricted to using a single piece of information to represent a document: Current models encode and represent one piece of information with one vector. In reality, there are often multiple pieces of pertinent information for a document, possibly spanning multiple modalities. For example, in product search there may be a title, description, reviews, and multiple images, each with its own caption. GCL generalises embedding model training to use as many pieces of information as desired.
    No notion of rank when dealing with degenerate queries: When there are degenerate queries - multiple results that satisfy some criteria of relevance - the ordering of the results is only ever learned indirectly from the many binary relationships. In reality, the ordering of results matters, even for first-stage retrieval. GCL allows the magnitude of query-document-specific relevance to be encoded in the embeddings and improves ranking of candidate documents.
    Poor text understanding when using CLIP-like methods: Multi-modal models like CLIP are trained to only work from image to text (and vice versa). Their text-text understanding is not as good as that of text-only models, because text-text relationships are learned indirectly through images. Many applications require both inter- and intra-modality understanding; GCL allows any combination of the two by directly optimizing for it.
    Lack of representative datasets to develop methods for vector search: In developing GCL, it became apparent there was a disconnect between the publicly available datasets for embedding model training and evaluation and real-world use cases. Existing benchmarks are typically text-only or inter-modal-only and focus on the 1-1 query-result paradigm. Additionally, existing datasets have limited notions of relevance: the majority encode it as a binary relationship, while several use (up to) a handful of discrete categorizations, often on the test set only. This differs from typical real-world use cases, where relevance can be a hard binary relationship or come from continuous variables. To help with this, we compiled a dataset of 10M (ranked) product-query pairs, across ~100k queries, nearly 5M products, and four evaluation splits (available here). submitted by /u/Jesse_marqo [link] [comments]

  • [D] Practical uses of AI inside companies
    by /u/CJSF (Machine Learning) on April 23, 2024 at 10:25 pm

    How are people using AI inside companies (startups -> FAANG) to improve operations and processes? There is so much talk about leveraging LLMs and GenAI, but I'm struggling to find real, concrete examples that are successful. The following areas come to mind first, though this list isn't exhaustive: design (and handoff), engineering, customer support, sales, documentation, and marketing. What's worked or shown promise? What hasn't worked? submitted by /u/CJSF [link] [comments]

  • Meta does everything OpenAI should be [D]
    by /u/ReputationMindless32 (Machine Learning) on April 23, 2024 at 10:03 pm

    I'm surprised (or maybe not) to say this, but Meta (or Facebook) democratises AI/ML much more than OpenAI, which was originally founded and primarily funded for this purpose. OpenAI has largely become a commercial, for-profit project. As far as the Llama models go, they don't yet reach GPT-4 capabilities for me, but I believe it's only a matter of time. What do you guys think about this? submitted by /u/ReputationMindless32 [link] [comments]

  • Accelerate ML workflows with Amazon SageMaker Studio Local Mode and Docker support
    by Shweta Singh (AWS Machine Learning Blog) on April 23, 2024 at 7:20 pm

    We are excited to announce two new capabilities in Amazon SageMaker Studio that will accelerate iterative development for machine learning (ML) practitioners: Local Mode and Docker support. ML model development often involves slow iteration cycles as developers switch between coding, training, and deployment. Each step requires waiting for remote compute resources to start up, which

  • [D] Speech to Text Word Level Timestamps Accuracy Issue
    by /u/Mindless-Ordinary485 (Machine Learning) on April 23, 2024 at 7:18 pm

    I've had a lot of success with Whisper when it comes to transcriptions, but word-level timestamps seem to be slightly inaccurate. From my understanding, "Whisper cannot provide reliable word timestamps, because END-TO-END models like the Transformer using a cross-entropy training criterion are not designed for reliably estimating word timestamps" (https://www.youtube.com/watch?v=H576iCWt1Co&t=192s). For my use case, I need precise word-level timestamps, because I'm doing audio insertion after specific words. This becomes problematic when I do an insertion and the back part of a word ends up on the other side. Example: given an original audio file with transcribed speech, if I want to insert a clip at the end of the word "France", and according to the timestamp "France" starts at 19.26 and ends at 19.85, I will insert the clip at 19.85. However, if the actual end of "France" is at 19.92, then when I insert the clip at 19.85, I will hear the remaining part of "France", likely "ce" (0.07s), after it. I'm curious whether anyone has faced a similar problem and what they did to get around it. I've experimented with a few open-source variations of Whisper, but I'm still running into this issue. submitted by /u/Mindless-Ordinary485 [link] [comments]
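
    One common workaround (it doesn't fix Whisper's timestamps themselves) is to pad the reported word end and clamp to the next word's start, so the insertion never lands inside the tail of the word. A sketch over Whisper-style word dicts; the 0.12s padding value is an arbitrary assumption to tune against your audio:

```python
def insertion_time(words, idx, pad=0.12):
    """Pick a time to insert audio after words[idx].
    words: list of {"word", "start", "end"} dicts (Whisper-style).
    Pads the reported end time, clamped to the next word's start so the
    insert can't overlap the following speech."""
    t = words[idx]["end"] + pad
    if idx + 1 < len(words):
        t = min(t, words[idx + 1]["start"])
    return round(t, 2)
```

    For the "France" example (reported end 19.85, true end 19.92), padding moves the cut to 19.97, past the trailing "ce"; when the next word starts sooner than the pad allows, the clamp falls back to the inter-word gap.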

  • [R] Wu's Method can Boost Symbolic AI to Rival Silver Medalists and AlphaGeometry to Outperform Gold Medalists at IMO Geometry
    by /u/SeawaterFlows (Machine Learning) on April 23, 2024 at 7:11 pm

    Paper: https://arxiv.org/abs/2404.06405
    Code: https://huggingface.co/datasets/bethgelab/simplegeometry
    Abstract: Proving geometric theorems constitutes a hallmark of visual reasoning combining both intuitive and logical skills. Therefore, automated theorem proving of Olympiad-level geometry problems is considered a notable milestone in human-level automated reasoning. The introduction of AlphaGeometry, a neuro-symbolic model trained with 100 million synthetic samples, marked a major breakthrough. It solved 25 of 30 International Mathematical Olympiad (IMO) problems whereas the reported baseline based on Wu's method solved only ten. In this note, we revisit the IMO-AG-30 Challenge introduced with AlphaGeometry, and find that Wu's method is surprisingly strong. Wu's method alone can solve 15 problems, and some of them are not solved by any of the other methods. This leads to two key findings: (i) Combining Wu's method with the classic synthetic methods of deductive databases and angle, ratio, and distance chasing solves 21 out of 30 problems by just using a CPU-only laptop with a time limit of 5 minutes per problem. Essentially, this classic method solves just 4 problems less than AlphaGeometry and establishes the first fully symbolic baseline strong enough to rival the performance of an IMO silver medalist. (ii) Wu's method even solves 2 of the 5 problems that AlphaGeometry failed to solve. Thus, by combining AlphaGeometry with Wu's method we set a new state-of-the-art for automated theorem proving on IMO-AG-30, solving 27 out of 30 problems, the first AI method which outperforms an IMO gold medalist. submitted by /u/SeawaterFlows [link] [comments]

  • [D] Method to generate Shapley contributions without a model object
    by /u/ozymandias_514 (Machine Learning) on April 23, 2024 at 6:08 pm

    Is there a method to generate approximations of Shapley values (or something similar) for data without using the model object? Essentially, I input features and model predictions on benchmark data, and the same for test data, and the output is contributions for each feature on the test data. submitted by /u/ozymandias_514 [link] [comments]

  • [P] A Python Intelligence Config Manager. Superset of hydra+pydantic+lsp
    by /u/cssunfu (Machine Learning) on April 23, 2024 at 3:57 pm

    I developed a very powerful Python config management tool. It can make your JSON config as powerful as Python code, while staying very friendly to humans. The most attractive feature is that it analyzes the Python code and JSON config file in real time, providing documentation display, parameter completion, and goto-Python-definition from the JSON config (powered by LSP). It offers config inheritance, parameter references, and parameter grid search similar to (or better than) hydra, plus data validation similar to pydantic and the ability to convert a dataclass to JSON Schema. This project is still in its early stages, and everyone is welcome to provide suggestions and ideas. git repo submitted by /u/cssunfu [link] [comments]

  • [N] Phi-3-mini released on HuggingFace
    by /u/topcodemangler (Machine Learning) on April 23, 2024 at 3:26 pm

    https://huggingface.co/microsoft/Phi-3-mini-128k-instruct The numbers in the technical report look really great, I guess need to be verified by 3rd parties. submitted by /u/topcodemangler [link] [comments]

  • [D] Drastic change in the accuracy score and other measures after hyperparameter tuning
    by /u/Saheenus (Machine Learning) on April 23, 2024 at 1:41 pm

    Hey, I am currently doing malware classification (malware/benign). Using naive Bayes (Bernoulli), the accuracy was 67% at first; after tuning, it goes straight up to 100%. Is this normal or not? I did outlier removal using the IQR and feature selection using correlation. submitted by /u/Saheenus [link] [comments]
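
    A jump from 67% to 100% usually signals data leakage rather than a better model: if the IQR outlier removal and correlation-based feature selection were computed on the full dataset (including the rows later used for evaluation), the score is contaminated. A small self-contained sketch with pure-noise labels shows how selecting features on all rows inflates accuracy:

```python
import random

random.seed(0)
n_rows, n_feats = 100, 200
# Random binary features and labels: there is genuinely nothing to learn.
X = [[random.random() < 0.5 for _ in range(n_feats)] for _ in range(n_rows)]
y = [random.random() < 0.5 for _ in range(n_rows)]

def acc(col, rows):
    """Accuracy of predicting y directly from feature `col` on `rows`."""
    return sum(X[i][col] == y[i] for i in rows) / len(rows)

# Leaky protocol: pick the "best" feature using every row,
# then score on those same rows.
leaky_col = max(range(n_feats), key=lambda c: acc(c, range(n_rows)))
leaky_acc = acc(leaky_col, range(n_rows))

# Proper protocol: select on a train split only, score on the held-out split.
train, held_out = range(n_rows // 2), range(n_rows // 2, n_rows)
chosen = max(range(n_feats), key=lambda c: acc(c, train))
holdout_acc = acc(chosen, held_out)
```

    With noise labels, leaky_acc lands well above 50% while holdout_acc hovers near chance. The same mechanism on real data, especially combined with aggressive outlier removal, can inflate scores dramatically; the fix is to fit all preprocessing steps on the training split only and re-evaluate.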

  • [D] How to Install and Deploy LLaMA 3 Into Production, and Hardware Requirements
    by /u/juliensalinas (Machine Learning) on April 23, 2024 at 12:33 pm

    Many are trying to install and deploy their own LLaMA 3 model, so here is a tutorial I just made showing how to deploy LLaMA 3 on an AWS EC2 instance: https://nlpcloud.com/how-to-install-and-deploy-llama-3-into-production.html Deploying LLaMA 3 8B is fairly easy but LLaMA 3 70B is another beast. Given the amount of VRAM needed you might want to provision more than one GPU and use a dedicated inference server like vLLM in order to split your model on several GPUs. LLaMA 3 8B requires around 16GB of disk space and 20GB of VRAM (GPU memory) in FP16. As for LLaMA 3 70B, it requires around 140GB of disk space and 160GB of VRAM in FP16. I hope it is useful, and if you have questions please don't hesitate to ask! Julien submitted by /u/juliensalinas [link] [comments]
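
    The arithmetic behind those disk-space figures is simply parameter count times bytes per parameter (2 bytes in FP16), before any headroom for the KV cache and activations; a quick sketch:

```python
GB = 1e9

def fp16_weight_gb(n_params: float) -> float:
    """Footprint of the weights alone in FP16 (2 bytes/param).
    Runtime VRAM is higher once the KV cache and activations are
    added, which is why 8B is quoted at ~20GB rather than 16GB."""
    return n_params * 2 / GB
```

    This reproduces the post's numbers (8B -> 16GB, 70B -> 140GB) and makes it easy to check whether a quantized variant (e.g. ~0.5-1 byte/param for 4-8 bit) would fit a given GPU.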

  • Significant new capabilities make it easier to use Amazon Bedrock to build and scale generative AI applications – and achieve impressive results
    by Swami Sivasubramanian (AWS Machine Learning Blog) on April 23, 2024 at 11:50 am

    We introduced Amazon Bedrock to the world a little over a year ago, delivering an entirely new way to build generative artificial intelligence (AI) applications. With the broadest selection of first- and third-party foundation models (FMs) as well as user-friendly capabilities, Amazon Bedrock is the fastest and easiest way to build and scale secure generative

  • Building scalable, secure, and reliable RAG applications using Knowledge Bases for Amazon Bedrock
    by Mani Khanuja (AWS Machine Learning Blog) on April 23, 2024 at 11:40 am

    This post explores the new enterprise-grade features for Knowledge Bases on Amazon Bedrock and how they align with the AWS Well-Architected Framework. With Knowledge Bases for Amazon Bedrock, you can quickly build applications using Retrieval Augmented Generation (RAG) for use cases like question answering, contextual chatbots, and personalized search.

  • [D] Are there any MoE models other than LLMs?
    by /u/lime_52 (Machine Learning) on April 23, 2024 at 9:58 am

    Is MoE architecture also applied in other ML areas, let’s say Computer Vision? Why aren’t they popular? Is it because we don’t scale vision transformers as much as LLMs, and MoE is best for scalability? submitted by /u/lime_52 [link] [comments]

  • [D] What best practices and workflows those working solo as DS/MLE should keep in mind?
    by /u/Melodic_Reality_646 (Machine Learning) on April 23, 2024 at 9:40 am

    I'm wondering what technical recruiters or seasoned DS/MLEs have to say about people with profiles like mine: good theoretical and decent technical background, but working solo for too long. Summary of my career for context: I've been working 8 years now as a DS, the first 3 in medium-sized R&D and consulting teams (for a big tech company), then for the past 5 as a solo DS for relatively successful non-AI-focused start-ups, mostly developing ML/NLP stuff to address specific issues or improve one specific feature of their product (i.e. never a whole product). In 5 years I designed, developed and deployed, say, 4 models (but experimented with many, of course), along with a few dashboards and simple Streamlit POCs. Recently, attending meetups and seeing how people who are part of actual teams work, discuss and exchange knowledge, it suddenly struck me: I'm missing out, I'm becoming obsolete. I don't feel sharp enough for technical interviews, and I'm not sure the way I develop and maintain my projects follows good standards/best practices (heck, I hardly follow a kanban; I mostly use my planner to report to my boss on progress). I do some version control and document what I put into prod, but even there I'm not sure I'm doing what would be expected within a team. submitted by /u/Melodic_Reality_646 [link] [comments]

  • [R] Seeking Expert Reviewers for Neural Network-Based Thermal Diffusion Research
    by /u/No-Palpitation-7229 (Machine Learning) on April 23, 2024 at 8:44 am

    Hello everyone, I'm preparing to submit a research paper and need to identify potential reviewers. My paper introduces a novel approach to solving thermal diffusion problems in steel rods using neural networks. Traditional methods often struggle with complex boundary conditions or nonlinear material properties, but our neural network model, trained on solutions from classical analytical methods, shows promise in predicting temperature distributions accurately with a low error margin. I'm looking for experts with a Ph.D. or M.D. and significant experience in physics, thermal dynamics, or related fields of machine learning. If you have expertise in these areas or know someone who does, I would greatly appreciate your input or referral. What I've done so far: Developed the neural network model and tested it against classical solutions. Drafted the manuscript detailing methodologies, results, and implications. I'm facing a challenge in finding suitable reviewers who have a deep understanding of both thermal physics and machine learning applications. Any guidance or suggestions from this community would be incredibly helpful. Thank you for considering my request! Best regards, Ed submitted by /u/No-Palpitation-7229 [link] [comments]

  • [D] Gated Long-Term Memory
    by /u/jessielesbian (Machine Learning) on April 23, 2024 at 7:52 am

    Today, I am presenting my latest idea: the Gated Long-Term Memory (GLTM) unit. Gated Long-Term Memory tries to implement an efficient LSTM alternative. Unlike LSTM, GLTM does all the heavy lifting in parallel; the only operations that are performed sequentially are the multiplication and addition operations. Gated Long-Term Memory uses only linear memory, compared to the quadratic memory of Transformers. submitted by /u/jessielesbian [link] [comments]
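
    Reading "only the multiplications and additions are sequential" as a gated linear recurrence (my interpretation for illustration, not the author's actual code): the input and forget gates for every timestep can be computed from the inputs in parallel upstream, leaving only an elementwise scan. A scalar sketch of that sequential part:

```python
def gated_scan(inputs, forgets, h0=0.0):
    """Sequential part of a gated linear recurrence:
        h_t = f_t * h_{t-1} + i_t
    where all i_t and f_t were already computed in parallel upstream.
    The state is a single running value per channel, so memory is
    linear in sequence length (vs. quadratic attention)."""
    h, outs = h0, []
    for i, f in zip(inputs, forgets):
        h = f * h + i
        outs.append(h)
    return outs
```

    Recurrences of this multiply-add form can even be parallelized further with an associative scan (as in S5 or Mamba), which is what makes linear-memory LSTM alternatives attractive on modern hardware.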

  • [D] Zotero Organization
    by /u/Relative_Tip_3647 (Machine Learning) on April 23, 2024 at 7:10 am

    For people who are using Zotero to organize and read research papers: how are you using collections, subcollections and tags? Concretely, I'd like to know what you are doing research on (vision, language, ...) and what collections, subcollections or tags you are using, and how. I recently started using Zotero and I am really confused about it. Looking for inspiration from other people. Thanks in advance. submitted by /u/Relative_Tip_3647 [link] [comments]

  • [D] Phi-3 to be released soon
    by /u/yusuf-bengio (Machine Learning) on April 23, 2024 at 1:13 am

    Heard from two independent sources at MSFT (one close to Sebastien Bubeck) about the upcoming Phi-3 models:
    - Three different model sizes (up to 14B)
    - Again, mostly synthetic and LLM-augmented training data
    - Apparently some upscaling techniques on the training side
    - No more Apache 2, but a more restrictive license (similar to Llama 3)
    - Mixtral-level performance with much fewer parameters
    I wanted to see if anyone has more insider information about the models. submitted by /u/yusuf-bengio [link] [comments]

  • Integrate HyperPod clusters with Active Directory for seamless multi-user login
    by Tomonori Shimomura (AWS Machine Learning Blog) on April 22, 2024 at 5:50 pm

    Amazon SageMaker HyperPod is purpose-built to accelerate foundation model (FM) training, removing the undifferentiated heavy lifting involved in managing and optimizing a large training compute cluster. With SageMaker HyperPod, you can train FMs for weeks and months without disruption. Typically, HyperPod clusters are used by multiple users: machine learning (ML) researchers, software engineers, data scientists,

  • The executive’s guide to generative AI for sustainability
    by Wafae Bakkali (AWS Machine Learning Blog) on April 22, 2024 at 5:40 pm

    Organizations are facing ever-increasing requirements for sustainability goals alongside environmental, social, and governance (ESG) practices. A Gartner, Inc. survey revealed that 87 percent of business leaders expect to increase their organization’s investment in sustainability over the next years. This post serves as a starting point for any executive seeking to navigate the intersection of generative

  • [D] Llama-3 may have just killed proprietary AI models
    by /u/madredditscientist (Machine Learning) on April 22, 2024 at 3:08 pm

    Full Blog Post
    Meta released Llama-3 only three days ago, and it already feels like the inflection point when open-source models finally closed the gap with proprietary models. The initial benchmarks show that Llama-3 70B comes pretty close to GPT-4 in many tasks: the official Meta page only shows that Llama-3 outperforms Gemini 1.5 and Claude Sonnet; Artificial Analysis shows that Llama-3 is in between Gemini 1.5 and Opus/GPT-4 for quality; and on the LMSYS Chatbot Arena Leaderboard, Llama-3 is ranked #5 while current GPT-4 models and Claude Opus are still tied at #1. The even more powerful Llama-3 400B+ model is still in training and is likely to surpass GPT-4 and Opus once released.
    Meta vs OpenAI: Some speculate that Meta's goal from the start was to target OpenAI with a "scorched earth" approach, releasing powerful open models to disrupt the competitive landscape and avoid being left behind in the AI race. Meta can likely outspend OpenAI on compute and talent: OpenAI makes an estimated revenue of $2B and is likely unprofitable, while Meta generated revenue of $134B and profits of $39B in 2023; Meta's compute resources likely outrank OpenAI's by now; and open source likely attracts better talent and researchers. One possible outcome could be the acquisition of OpenAI by Microsoft to catch up with Meta. Google is also making moves into the open-model space and has similar capabilities to Meta. It will be interesting to see where they fit in.
    The Winners: Developers and AI Product Startups. I recently wrote about the excitement of building an AI startup right now, as your product automatically improves with each major model advancement. With the release of Llama-3, the opportunities for developers are even greater. No more vendor lock-in: instead of just wrapping proprietary API endpoints, developers can now integrate AI deeply into their products in a very cost-effective and performant way. There are already over 800 Llama-3 model variations on Hugging Face, and it looks like everyone will be able to fine-tune for their use cases, languages, or industry. Faster, cheaper hardware: Groq can now generate 800 Llama-3 tokens per second at a small fraction of the GPT costs; near-instant LLM responses at low prices are on the horizon. Open-source multimodal models for vision and video still have to catch up, but I expect this to happen very soon.
    The release of Llama-3 marks a significant milestone in the democratization of AI, but it's probably too early to declare the death of proprietary models. Who knows, maybe GPT-5 will surprise us all and surpass our imaginations of what transformer models can do. These are definitely super exciting times to build in the AI space! submitted by /u/madredditscientist [link] [comments]

  • [D] Simple Questions Thread
    by /u/AutoModerator (Machine Learning) on April 21, 2024 at 3:00 pm

    Please post your questions here instead of creating a new thread. Encourage others who create new posts for questions to post here instead! Thread will stay alive until next one so keep posting after the date in the title. Thanks to everyone for answering questions in the previous thread! submitted by /u/AutoModerator [link] [comments]

  • Introducing automatic training for solutions in Amazon Personalize
    by Ba'Carri Johnson (AWS Machine Learning Blog) on April 20, 2024 at 12:38 am

    Amazon Personalize is excited to announce automatic training for solutions. Solution training is fundamental to maintain the effectiveness of a model and make sure recommendations align with users’ evolving behaviors and preferences. As data patterns and trends change over time, retraining the solution with the latest relevant data enables the model to learn and adapt,

  • Use Kubernetes Operators for new inference capabilities in Amazon SageMaker that reduce LLM deployment costs by 50% on average
    by Rajesh Ramchander (AWS Machine Learning Blog) on April 19, 2024 at 4:55 pm

    We are excited to announce a new version of the Amazon SageMaker Operators for Kubernetes using the AWS Controllers for Kubernetes (ACK). ACK is a framework for building Kubernetes custom controllers, where each controller communicates with an AWS service API. These controllers allow Kubernetes users to provision AWS resources like buckets, databases, or message queues

  • Talk to your slide deck using multimodal foundation models hosted on Amazon Bedrock – Part 2
    by Archana Inapudi (AWS Machine Learning Blog) on April 19, 2024 at 3:15 pm

    In Part 1 of this series, we presented a solution that used the Amazon Titan Multimodal Embeddings model to convert individual slides from a slide deck into embeddings. We stored the embeddings in a vector database and then used the Large Language-and-Vision Assistant (LLaVA 1.5-7b) model to generate text responses to user questions based on

  • Scale AI training and inference for drug discovery through Amazon EKS and Karpenter
    by Matthew Welborn (AWS Machine Learning Blog) on April 19, 2024 at 3:07 pm

    This is a guest post co-written with the leadership team of Iambic Therapeutics. Iambic Therapeutics is a drug discovery startup with a mission to create innovative AI-driven technologies to bring better medicines to cancer patients, faster. Our advanced generative and predictive artificial intelligence (AI) tools enable us to search the vast space of possible drug

  • Generate customized, compliant application IaC scripts for AWS Landing Zone using Amazon Bedrock
    by Ebbey Thomas (AWS Machine Learning Blog) on April 18, 2024 at 5:57 pm

    As you navigate the complexities of cloud migration, the need for a structured, secure, and compliant environment is paramount. AWS Landing Zone addresses this need by offering a standardized approach to deploying AWS resources. This makes sure your cloud foundation is built according to AWS best practices from the start. With AWS Landing Zone, you eliminate the guesswork in security configurations, resource provisioning, and account management. It’s particularly beneficial for organizations looking to scale without compromising on governance or control, providing a clear path to a robust and efficient cloud setup. In this post, we show you how to generate customized, compliant IaC scripts for AWS Landing Zone using Amazon Bedrock.

  • Live Meeting Assistant with Amazon Transcribe, Amazon Bedrock, and Knowledge Bases for Amazon Bedrock
    by Bob Strahan (AWS Machine Learning Blog) on April 18, 2024 at 5:08 pm

    You’ve likely experienced the challenge of taking notes during a meeting while trying to pay attention to the conversation. You’ve probably also experienced the need to quickly fact-check something that’s been said, or look up information to answer a question that’s just been asked in the call. Or maybe you have a team member that always joins meetings late, and expects you to send them a quick summary over chat to catch them up. Then there are the times that others are talking in a language that’s not your first language, and you’d love to have a live translation of what people are saying to make sure you understand correctly. And after the call is over, you usually want to capture a summary for your records, or to send to the participants, with a list of all the action items, owners, and due dates. All of this, and more, is now possible with our newest sample solution, Live Meeting Assistant (LMA).

  • Meta Llama 3 models are now available in Amazon SageMaker JumpStart
    by Kyle Ulrich (AWS Machine Learning Blog) on April 18, 2024 at 4:31 pm

    Today, we are excited to announce that Meta Llama 3 foundation models are available through Amazon SageMaker JumpStart to deploy and run inference. The Llama 3 models are a collection of pre-trained and fine-tuned generative text models. In this post, we walk through how to discover and deploy Llama 3 models via SageMaker JumpStart. What is

  • Slack delivers native and secure generative AI powered by Amazon SageMaker JumpStart
    by Jackie Rocca (AWS Machine Learning Blog) on April 18, 2024 at 12:00 pm

    We are excited to announce that Slack, a Salesforce company, has collaborated with Amazon SageMaker JumpStart to power Slack AI’s initial search and summarization features and provide safeguards for Slack to use large language models (LLMs) more securely. Slack worked with SageMaker JumpStart to host industry-leading third-party LLMs so that data is not shared with infrastructure owned by third-party model providers. This keeps customer data in Slack at all times and upholds the same security practices and compliance standards that customers expect from Slack itself.

  • Uncover hidden connections in unstructured financial data with Amazon Bedrock and Amazon Neptune
    by Xan Huang (AWS Machine Learning Blog) on April 17, 2024 at 3:00 pm

    In asset management, portfolio managers need to closely monitor companies in their investment universe to identify risks and opportunities, and guide investment decisions. Tracking direct events like earnings reports or credit downgrades is straightforward—you can set up alerts to notify managers of news containing company names. However, detecting second- and third-order impacts arising from events is much harder ...
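
The post’s approach stores relationships in Amazon Neptune and queries them with help from Amazon Bedrock; as a purely local illustration of the underlying idea, the sketch below (with hypothetical company names and a plain Python adjacency list standing in for the graph database) finds entities reachable within a few "impact hops" of an event:

```python
from collections import deque

def indirect_impacts(graph, source, max_hops=3):
    """Breadth-first traversal returning every entity reachable from
    `source` within `max_hops` relationship edges, with its hop count."""
    seen = {source: 0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        hops = seen[node]
        if hops == max_hops:
            continue  # don't expand beyond the hop limit
        for neighbor in graph.get(node, []):
            if neighbor not in seen:
                seen[neighbor] = hops + 1
                queue.append(neighbor)
    del seen[source]  # report only downstream entities
    return seen

# Hypothetical "disruption at X affects Y" edges extracted from filings.
graph = {
    "MetalWorks": ["FabSupply"],
    "FabSupply":  ["ChipCo"],
    "ChipCo":     ["AutoCorp"],
}
print(indirect_impacts(graph, "MetalWorks"))
```

Here a disruption at MetalWorks surfaces AutoCorp as a third-order impact, even though no news article mentions both names together.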

  • Open source observability for AWS Inferentia nodes within Amazon EKS clusters
    by Riccardo Freschi (AWS Machine Learning Blog) on April 17, 2024 at 2:54 pm

    This post walks you through the Open Source Observability pattern for AWS Inferentia, which shows you how to monitor the performance of ML chips used in an Amazon Elastic Kubernetes Service (Amazon EKS) cluster whose data plane nodes run on Amazon Elastic Compute Cloud (Amazon EC2) Inf1 and Inf2 instances.

  • Explore data with ease: Use SQL and Text-to-SQL in Amazon SageMaker Studio JupyterLab notebooks
    by Pranav Murthy (AWS Machine Learning Blog) on April 16, 2024 at 11:00 pm

    Amazon SageMaker Studio provides a fully managed solution for data scientists to interactively build, train, and deploy machine learning (ML) models. In the process of working on their ML tasks, data scientists typically start their workflow by discovering relevant data sources and connecting to them. They then use SQL to explore, analyze, visualize, and integrate the data ...
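
SageMaker Studio JupyterLab ships its own SQL integration; as a stand-alone illustration of the explore-with-SQL step it describes, the sketch below uses Python’s built-in sqlite3 with a hypothetical churn table in place of a connected data source:

```python
import sqlite3

# Hypothetical sample data standing in for a discovered data source.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE churn (customer_id INTEGER, plan TEXT, churned INTEGER)")
conn.executemany(
    "INSERT INTO churn VALUES (?, ?, ?)",
    [(1, "basic", 0), (2, "pro", 1), (3, "basic", 1), (4, "pro", 0)],
)

# An exploratory aggregation a data scientist might run before modeling.
rows = conn.execute(
    "SELECT plan, COUNT(*) AS n, AVG(churned) AS churn_rate "
    "FROM churn GROUP BY plan ORDER BY plan"
).fetchall()
for plan, n, rate in rows:
    print(plan, n, rate)
```

The same pattern — pull a quick aggregate, eyeball it, refine the query — is what the in-notebook SQL support streamlines.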

  • Distributed training and efficient scaling with the Amazon SageMaker Model Parallel and Data Parallel Libraries
    by Xinle Sheila Liu (AWS Machine Learning Blog) on April 16, 2024 at 4:18 pm

    In this post, we explore the performance benefits of the Amazon SageMaker Model Parallelism (SMP) and SageMaker Distributed Data Parallelism (SMDDP) libraries, and how you can use them to train large models efficiently on SageMaker. We demonstrate the performance of SageMaker with benchmarks on ml.p4d.24xlarge clusters of up to 128 instances, and FSDP mixed precision with bfloat16 for the Llama 2 model.

  • Manage your Amazon Lex bot via AWS CloudFormation templates
    by Thomas Rindfuss (AWS Machine Learning Blog) on April 16, 2024 at 4:11 pm

    Amazon Lex is a fully managed artificial intelligence (AI) service with advanced natural language models to design, build, test, and deploy conversational interfaces in applications. It employs advanced deep learning technologies to understand user input, enabling developers to create chatbots, virtual assistants, and other applications that can interact with users in natural language.
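
Defining a Lex bot as code might look like the minimal CloudFormation sketch below. The bot and role names are hypothetical, and property names should be verified against the AWS::Lex::Bot resource reference before use:

```yaml
Resources:
  OrderFlowersBot:                           # hypothetical bot
    Type: AWS::Lex::Bot
    Properties:
      Name: OrderFlowersBot
      RoleArn: !GetAtt LexRuntimeRole.Arn    # assumes an IAM role defined elsewhere
      DataPrivacy:
        ChildDirected: false
      IdleSessionTTLInSeconds: 300
      AutoBuildBotLocales: true
      BotLocales:
        - LocaleId: en_US
          NluConfidenceThreshold: 0.4
```

Keeping the bot definition in a template lets you version, review, and redeploy it like any other infrastructure.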

  • A secure approach to generative AI with AWS
    by Anthony Liguori (AWS Machine Learning Blog) on April 16, 2024 at 4:00 pm

    Generative artificial intelligence (AI) is transforming the customer experience in industries across the globe. Customers are building generative AI applications using large language models (LLMs) and other foundation models (FMs), which enhance customer experiences, transform operations, improve employee productivity, and create new revenue channels. The biggest concern we hear from customers as they explore the advantages of generative AI is how to protect their highly sensitive data and investments. At AWS, our top priority is safeguarding the security and confidentiality of our customers' workloads. We think about security across the three layers of our generative AI stack ...

  • Cost-effective document classification using the Amazon Titan Multimodal Embeddings Model
    by Sumit Bhati (AWS Machine Learning Blog) on April 11, 2024 at 7:21 pm

    Organizations across industries want to categorize and extract insights from high volumes of documents of different formats. Manually processing these documents to classify and extract information remains expensive, error-prone, and difficult to scale. Advances in generative artificial intelligence (AI) have given rise to intelligent document processing (IDP) solutions that can automate the document classification ...
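
The embedding-based classification idea behind such solutions can be sketched locally: embed each document, then assign it to the class whose centroid embedding is most similar. The toy 3-dimensional vectors below are stand-ins for real Amazon Titan Multimodal Embeddings output, and the class names are hypothetical:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def classify(doc_vec, class_centroids):
    """Assign the class whose centroid embedding is most similar."""
    return max(class_centroids, key=lambda c: cosine(doc_vec, class_centroids[c]))

# Toy stand-ins for Titan embedding vectors of labeled example documents.
centroids = {
    "invoice":  [0.9, 0.1, 0.0],
    "contract": [0.1, 0.9, 0.1],
}
print(classify([0.8, 0.2, 0.0], centroids))  # an invoice-like embedding
```

In a real pipeline the vectors would come from the embeddings model rather than hand-written lists, but the nearest-centroid decision rule is the same.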

Download the AWS Machine Learning Specialty Exam Prep App on iOS

AWS machine learning certification prep

Download the AWS Machine Learning Specialty Exam Prep App on Android/Web/Amazon


Pass the 2024 AWS Cloud Practitioner CCP CLF-C01 Certification with flying colors
Ace the 2024 AWS Solutions Architect Associate SAA-C03 Exam with Confidence