The AWS Certified Machine Learning Specialty validates expertise in building, training, tuning, and deploying machine learning (ML) models on AWS.
Use this App to learn about Machine Learning on AWS and prepare for the AWS Machine Learning Specialty Certification MLS-C01.
Download AWS Machine Learning Specialty Exam Prep App on iOS
Download AWS Machine Learning Specialty Exam Prep App on Android/Web/Amazon
The app provides hundreds of quizzes and practice exams covering:
– Machine Learning Operations on AWS
– Modeling
– Data Engineering
– Computer Vision
– Exploratory Data Analysis
– ML Implementation and Operations
– Machine Learning Basics Questions and Answers
– Machine Learning Advanced Questions and Answers
– Scorecard
– Countdown timer
– Machine Learning Cheat Sheets
– Machine Learning Interview Questions and Answers
– Machine Learning Latest News
The app covers Machine Learning Basics and Advanced topics including: NLP, Computer Vision, Python, linear regression, logistic regression, Sampling, datasets, statistical interaction, selection bias, non-Gaussian distribution, bias-variance trade-off, Normal Distribution, correlation and covariance, Point Estimates and Confidence Interval, A/B Testing, p-value, statistical power of sensitivity, overfitting and underfitting, regularization, Law of Large Numbers, Confounding Variables, Survivorship Bias, univariate, bivariate and multivariate analysis, Resampling, ROC curve, TF-IDF vectorization, Cluster Sampling, etc.
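Several of these topics come together in just a few lines of code. As a minimal illustrative sketch (using scikit-learn and a tiny made-up dataset, not material from the app), the snippet below applies TF-IDF vectorization, fits a regularized logistic regression classifier, and evaluates it with an ROC AUC score:

```python
# Minimal sketch (toy data, not app content): TF-IDF vectorization, a
# regularized logistic regression classifier, and ROC AUC evaluation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

texts = [
    "great product fast shipping", "terrible quality broke quickly",
    "excellent value would buy again", "waste of money very disappointed",
    "works as described happy customer", "stopped working after a week",
    "love it highly recommend", "awful experience never again",
]
labels = [1, 0, 1, 0, 1, 0, 1, 0]  # 1 = positive review, 0 = negative

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.25, random_state=42, stratify=labels
)

vectorizer = TfidfVectorizer()            # TF-IDF vectorization
X_train_vec = vectorizer.fit_transform(X_train)
X_test_vec = vectorizer.transform(X_test)

# C is the inverse of L2 regularization strength; smaller C means stronger
# regularization, which helps guard against overfitting.
clf = LogisticRegression(C=1.0)
clf.fit(X_train_vec, y_train)

probs = clf.predict_proba(X_test_vec)[:, 1]
print("ROC AUC:", roc_auc_score(y_test, probs))
```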
Domain 1: Data Engineering
Create data repositories for machine learning.
Identify data sources (e.g., content and location, primary sources such as user data)
Determine storage mediums (e.g., DB, Data Lake, S3, EFS, EBS)
Identify and implement a data ingestion solution.
Data job styles/types (batch load, streaming)
Data ingestion pipelines (batch-based and streaming-based ML workloads), among others (see the sketch below)
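For the ingestion styles listed above, the following is a minimal sketch (the bucket and stream names are assumed placeholders) contrasting a batch-style load to Amazon S3 with streaming-style ingestion to Amazon Kinesis Data Streams using boto3:

```python
# Minimal sketch (assumed bucket/stream names): batch ingestion to Amazon S3
# versus streaming ingestion to Amazon Kinesis Data Streams with boto3.
import json
import boto3

s3 = boto3.client("s3")
kinesis = boto3.client("kinesis")

# Batch-style load: upload a local file of records to an S3 data lake prefix.
s3.upload_file(
    Filename="daily_export.csv",
    Bucket="my-ml-data-lake",          # assumed bucket name
    Key="raw/sales/2024/09/09/daily_export.csv",
)

# Streaming-style ingestion: push individual events to a Kinesis data stream
# for downstream consumers (e.g., Kinesis Data Firehose or AWS Lambda).
event = {"user_id": "u-123", "action": "click", "ts": "2024-09-09T22:00:00Z"}
kinesis.put_record(
    StreamName="clickstream-events",   # assumed stream name
    Data=json.dumps(event).encode("utf-8"),
    PartitionKey=event["user_id"],
)
```

A batch job typically lands whole files in a data lake on a schedule, while the streaming path pushes individual events for near-real-time consumers.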
Domain 2: Exploratory Data Analysis
Sanitize and prepare data for modeling.
Perform feature engineering.
Analyze and visualize data for machine learning.
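As a minimal sketch of these Domain 2 tasks (using pandas and a small made-up DataFrame), the snippet below imputes missing values, engineers a couple of features, and produces quick summary statistics and correlations:

```python
# Minimal sketch with a made-up DataFrame: basic data sanitization, a simple
# engineered feature, one-hot encoding, and a quick summary for analysis.
import pandas as pd

df = pd.DataFrame({
    "age": [34, None, 29, 52, 41],
    "income": [62000, 48000, None, 91000, 73000],
    "segment": ["A", "B", "A", "C", "B"],
})

# Sanitize: impute missing numeric values with the column median.
df["age"] = df["age"].fillna(df["age"].median())
df["income"] = df["income"].fillna(df["income"].median())

# Feature engineering: a derived ratio feature and one-hot encoded categories.
df["income_per_year_of_age"] = df["income"] / df["age"]
df = pd.get_dummies(df, columns=["segment"], prefix="segment")

# Analyze: summary statistics and pairwise correlations.
print(df.describe())
print(df.corr(numeric_only=True))
```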
Domain 3: Modeling
Frame business problems as machine learning problems.
Select the appropriate model(s) for a given machine learning problem.
Train machine learning models.
Perform hyperparameter optimization.
Evaluate machine learning models.
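As one way to practice the hyperparameter optimization objective above, here is a minimal sketch using the SageMaker Python SDK and the built-in XGBoost container; the role ARN, bucket, data paths, and tuning ranges are assumed placeholders rather than a recommended configuration:

```python
# Minimal sketch of hyperparameter optimization with the SageMaker Python SDK
# and the built-in XGBoost container. Role, bucket, and paths are placeholders.
import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput
from sagemaker.tuner import ContinuousParameter, HyperparameterTuner, IntegerParameter

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/MySageMakerRole"  # assumed role ARN
bucket = "my-ml-bucket"                                   # assumed bucket

image_uri = sagemaker.image_uris.retrieve(
    "xgboost", session.boto_region_name, version="1.7-1"
)

estimator = Estimator(
    image_uri=image_uri,
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path=f"s3://{bucket}/output",
    sagemaker_session=session,
)
estimator.set_hyperparameters(objective="binary:logistic", num_round=200)

tuner = HyperparameterTuner(
    estimator=estimator,
    objective_metric_name="validation:auc",
    hyperparameter_ranges={
        "eta": ContinuousParameter(0.01, 0.3),
        "max_depth": IntegerParameter(3, 10),
    },
    objective_type="Maximize",
    max_jobs=10,
    max_parallel_jobs=2,
)

tuner.fit({
    "train": TrainingInput(f"s3://{bucket}/train/", content_type="text/csv"),
    "validation": TrainingInput(f"s3://{bucket}/validation/", content_type="text/csv"),
})
print("Best training job:", tuner.best_training_job())
```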
Domain 4: Machine Learning Implementation and Operations
Build machine learning solutions for performance, availability, scalability, resiliency, and fault tolerance.
Recommend and implement the appropriate machine learning services and features for a given problem.
Apply basic AWS security practices to machine learning solutions.
Deploy and operationalize machine learning solutions.
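A minimal sketch of the deployment objective (with an assumed endpoint name and CSV payload) is shown below: deploy a trained estimator to a real-time SageMaker endpoint, then invoke it from application code with boto3:

```python
# Minimal sketch (assumed endpoint name and payload format): deploying a
# trained SageMaker estimator as a real-time endpoint and invoking it.
import boto3

# Deploying from an already-trained estimator (e.g., the tuner/estimator in
# the Domain 3 sketch above):
# predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")

# Invoking an existing endpoint from application code:
runtime = boto3.client("sagemaker-runtime")
response = runtime.invoke_endpoint(
    EndpointName="my-xgboost-endpoint",   # assumed endpoint name
    ContentType="text/csv",
    Body="34,62000,1,0,0",                # one CSV row of features
)
print(response["Body"].read().decode("utf-8"))
```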
Machine Learning Services covered:
Amazon Comprehend
AWS Deep Learning AMIs (DLAMI)
AWS DeepLens
Amazon Forecast
Amazon Fraud Detector
Amazon Lex
Amazon Polly
Amazon Rekognition
Amazon SageMaker
Amazon Textract
Amazon Transcribe
Amazon Translate
Other Services and topics covered are:
Ingestion/Collection
Processing/ETL
Data analysis/visualization
Model training
Model deployment/inference
Operational
AWS ML application services
Language relevant to ML (for example, Python, Java, Scala, R, SQL)
Notebooks and integrated development environments (IDEs)
Amazon S3, SageMaker, Kinesis, Lake Formation, Athena, Kibana, Redshift, Textract, EMR, and Glue
Data formats such as CSV, JSON, images, and Parquet, as well as databases
Amazon EC2, Amazon Elastic Container Registry (Amazon ECR), Amazon Elastic Container Service (Amazon ECS), Amazon Elastic Kubernetes Service (Amazon EKS), Amazon Redshift
Important: To succeed on the real exam, do not simply memorize the answers in this app. It is essential that you understand why each answer is right or wrong, and the concepts behind it, by carefully reading the reference documents provided with the answers.
Note and disclaimer: We are not affiliated with Microsoft, Azure, Google, or Amazon. The questions are compiled from the certification study guide and materials available online. They should help you pass the exam, but passing is not guaranteed, and we are not responsible for any exam you do not pass.
Download AWS Machine Learning Specialty Exam Prep App on iOS
Download AWS Machine Learning Specialty Exam Prep App on Android/Web/Amazon
- Amazon EC2 P5e instances are generally available, by Avi Kulkarni (AWS Machine Learning Blog) on September 9, 2024 at 10:20 pm
In this post, we discuss the core capabilities of Amazon Elastic Compute Cloud (Amazon EC2) P5e instances and the use cases they’re well-suited for. We walk you through an example of how to get started with these instances and carry out inference deployment of Meta Llama 3.1 70B and 405B models on them.
- Exploring data using AI chat at Domo with Amazon Bedrock, by Joe Clark (AWS Machine Learning Blog) on September 9, 2024 at 9:51 pm
In this post, we share how Domo, a cloud-centered data experiences innovator is using Amazon Bedrock to provide a flexible and powerful AI solution.
- How Vidmob is using generative AI to transform its creative data landscape, by Mickey Alon (AWS Machine Learning Blog) on September 6, 2024 at 9:12 pm
In this post, we illustrate how Vidmob, a creative data company, worked with the AWS Generative AI Innovation Center (GenAIIC) team to uncover meaningful insights at scale within creative data using Amazon Bedrock.
- Fine-tune Llama 3 for text generation on Amazon SageMaker JumpStart, by Ben Friebe (AWS Machine Learning Blog) on September 6, 2024 at 8:22 pm
In this post, we demonstrate how to fine-tune the recently released Llama 3 models from Meta, specifically the llama-3-8b and llama-3-70b variants, using Amazon SageMaker JumpStart.
- Ground truth curation and metric interpretation best practices for evaluating generative AI question answering using FMEval, by Samantha Stuart (AWS Machine Learning Blog) on September 6, 2024 at 8:11 pm
In this post, we discuss best practices for working with Foundation Model Evaluations Library (FMEval) in ground truth curation and metric interpretation for evaluating question answering applications for factual knowledge and quality.
- Build powerful RAG pipelines with LlamaIndex and Amazon Bedrock, by Shreyas Subramanian (AWS Machine Learning Blog) on September 5, 2024 at 9:53 pm
In this post, we show you how to use LlamaIndex with Amazon Bedrock to build robust and sophisticated RAG pipelines that unlock the full potential of LLMs for knowledge-intensive tasks.
- Evaluating prompts at scale with Prompt Management and Prompt Flows for Amazon Bedrock, by Antonio Rodriguez (AWS Machine Learning Blog) on September 5, 2024 at 8:11 pm
In this post, we demonstrate how to implement an automated prompt evaluation system using Amazon Bedrock so you can streamline your prompt development process and improve the overall quality of your AI-generated content.
- Deploy Amazon SageMaker pipelines using AWS Controllers for Kubernetes, by Pratik Yeole (AWS Machine Learning Blog) on September 4, 2024 at 10:40 pm
In this post, we show how ML engineers familiar with Jupyter notebooks and SageMaker environments can efficiently work with DevOps engineers familiar with Kubernetes and related tools to design and maintain an ML pipeline with the right infrastructure for their organization. This enables DevOps engineers to manage all the steps of the ML lifecycle with the same set of tools and environment they are used to.
- Effectively manage foundation models for generative AI applications with Amazon SageMaker Model Registry, by Chaitra Mathur (AWS Machine Learning Blog) on September 4, 2024 at 10:34 pm
In this post, we explore the new features of Model Registry that streamline foundation model (FM) management: you can now register unzipped model artifacts and pass an End User License Agreement (EULA) acceptance flag without needing users to intervene.
- Build an ecommerce product recommendation chatbot with Amazon Bedrock Agents, by Mahmoud Salaheldin (AWS Machine Learning Blog) on September 4, 2024 at 9:10 pm
In this post, we show you how to build an ecommerce product recommendation chatbot using Amazon Bedrock Agents and foundation models (FMs) available in Amazon Bedrock.
- How Thomson Reuters Labs achieved AI/ML innovation at pace with AWS MLOps services, by Andrei Voinov (AWS Machine Learning Blog) on September 4, 2024 at 8:22 pm
In this post, we show you how Thomson Reuters Labs (TR Labs) was able to develop an efficient, flexible, and powerful MLOps process by adopting a standardized MLOps framework that uses AWS SageMaker, SageMaker Experiments, SageMaker Model Registry, and SageMaker Pipelines. The goal being to accelerate how quickly teams can experiment and innovate using AI and machine learning (ML)—whether using natural language processing (NLP), generative AI, or other techniques. We discuss how this has helped decrease the time to market for fresh ideas and helped build a cost-efficient machine learning lifecycle.
- Build a generative AI image description application with Anthropic’s Claude 3.5 Sonnet on Amazon Bedrock and AWS CDK, by Dinesh Sajwan (AWS Machine Learning Blog) on September 3, 2024 at 9:13 pm
In this post, we delve into the process of building and deploying a sample application capable of generating multilingual descriptions for multiple images with a Streamlit UI, AWS Lambda powered with the Amazon Bedrock SDK, and AWS AppSync driven by the open source Generative AI CDK Constructs.
- Use LangChain with PySpark to process documents at massive scale with Amazon SageMaker Studio and Amazon EMR Serverless, by Raj Ramasubbu (AWS Machine Learning Blog) on September 3, 2024 at 7:05 pm
In this post, we explore how to build a scalable and efficient Retrieval Augmented Generation (RAG) system using the new EMR Serverless integration, Spark’s distributed processing, and an Amazon OpenSearch Service vector database powered by the LangChain orchestration framework. This solution enables you to process massive volumes of textual data, generate relevant embeddings, and store them in a powerful vector database for seamless retrieval and generation.
- Best practices for prompt engineering with Meta Llama 3 for Text-to-SQL use cases, by Marco Punio (AWS Machine Learning Blog) on August 30, 2024 at 8:36 pm
In this post, we explore a solution that uses the vector engine ChromaDB and Meta Llama 3, a publicly available foundation model hosted on SageMaker JumpStart, for a Text-to-SQL use case. We shared a brief history of Meta Llama 3, best practices for prompt engineering with Meta Llama 3 models, and an architecture pattern using few-shot prompting and RAG to extract the relevant schemas stored as vectors in ChromaDB.
- Implementing advanced prompt engineering with Amazon Bedrock, by Jonah Craig (AWS Machine Learning Blog) on August 30, 2024 at 8:09 pm
In this post, we provide insights and practical examples to help balance and optimize the prompt engineering workflow. We focus on advanced prompt techniques and best practices for the models provided in Amazon Bedrock, a fully managed service that offers a choice of high-performing foundation models from leading AI companies such as Anthropic, Cohere, Meta, Mistral AI, Stability AI, and Amazon through a single API. With these prompting techniques, developers and researchers can harness the full capabilities of Amazon Bedrock, providing clear and concise communication while mitigating potential risks or undesirable outputs.
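The post’s own examples are not reproduced here, but as a generic illustration of the single-API access described in the summary, the following minimal sketch (with a placeholder model ID and prompt) sends a prompt to a Bedrock-hosted model using the boto3 Converse API:

```python
# Minimal illustrative sketch (not code from the blog post): sending a prompt
# to a foundation model in Amazon Bedrock via the boto3 Converse API.
import boto3

bedrock = boto3.client("bedrock-runtime")  # region/credentials from your AWS config

response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # placeholder model ID
    messages=[
        {
            "role": "user",
            "content": [{"text": "Summarize the bias-variance trade-off in two sentences."}],
        }
    ],
    inferenceConfig={"maxTokens": 256, "temperature": 0.2},
)
print(response["output"]["message"]["content"][0]["text"])
```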
- Accelerate Generative AI Inference with NVIDIA NIM Microservices on Amazon SageMaker, by Saurabh Trikande (AWS Machine Learning Blog) on August 29, 2024 at 10:26 pm
In this post, we provide a walkthrough of how customers can use generative artificial intelligence (AI) models and LLMs using NVIDIA NIM integration with SageMaker. We demonstrate how this integration works and how you can deploy these state-of-the-art models on SageMaker, optimizing their performance and cost.
- Celebrating the final AWS DeepRacer League championship and road ahead, by Shashank Murthy (AWS Machine Learning Blog) on August 29, 2024 at 7:01 pm
The AWS DeepRacer League is the world’s first autonomous racing league, open to everyone and powered by machine learning (ML). AWS DeepRacer brings builders together from around the world, creating a community where you learn ML hands-on through friendly autonomous racing competitions. As we celebrate the achievements of over 560,000 participants from more than 150 countries who sharpened their skills through the AWS DeepRacer League over the last 6 years, we also prepare to close this chapter with a final season that serves as both a victory lap and a launching point for what’s next in the world of AWS DeepRacer.
- Provide a personalized experience for news readers using Amazon Personalize and Amazon Titan Text Embeddings on Amazon Bedrock, by Joydeep Dutta (AWS Machine Learning Blog) on August 29, 2024 at 5:00 pm
In this post, we show how you can recommend breaking news to a user using AWS AI/ML services. By taking advantage of the power of Amazon Personalize and Amazon Titan Text Embeddings on Amazon Bedrock, you can show articles to interested users within seconds of them being published.
- Implementing tenant isolation using Agents for Amazon Bedrock in a multi-tenant environment, by Ulrich Hinze (AWS Machine Learning Blog) on August 28, 2024 at 8:57 pm
In this blog post, we will show you how to implement tenant isolation using Amazon Bedrock agents within a multi-tenant environment. We’ll demonstrate this using a sample multi-tenant e-commerce application that provides a service for various tenants to create online stores. This application will use Amazon Bedrock agents to develop an AI assistant or chatbot capable of providing tenant-specific information, such as return policies and user-specific information like order counts and status updates.
- Connect the Amazon Q Business generative AI coding companion to your GitHub repositories with Amazon Q GitHub (Cloud) connector, by Maxim Chernyshev (AWS Machine Learning Blog) on August 28, 2024 at 8:29 pm
In this post, we show you how to perform natural language queries over the indexed GitHub (Cloud) data using the AI-powered chat interface provided by Amazon Q Business. We also cover how Amazon Q Business applies access control lists (ACLs) associated with the indexed documents to provide permissions-filtered responses.
Download AWS Machine Learning Specialty Exam Prep App on iOS
Download AWS Machine Learning Specialty Exam Prep App on Android/Web/Amazon