The AWS Certified Machine Learning - Specialty certification validates expertise in building, training, tuning, and deploying machine learning (ML) models on AWS.
Use this app to learn about machine learning on AWS and prepare for the AWS Machine Learning Specialty certification (MLS-C01).
Download AWS Machine Learning Specialty Exam Prep App on iOS
Download AWS Machine Learning Specialty Exam Prep App on Android/Web/Amazon
The app provides hundreds of quizzes and practice exams covering:
– Machine Learning Operations on AWS
– Modeling
– Data Engineering
– Computer Vision
– Exploratory Data Analysis
– ML Implementation & Operations
– Machine Learning Basics Questions and Answers
– Machine Learning Advanced Questions and Answers
– Scorecard
– Countdown timer
– Machine Learning Cheat Sheets
– Machine Learning Interview Questions and Answers
– Machine Learning Latest News
The app covers machine learning basics and advanced topics including: NLP, computer vision, Python, linear regression, logistic regression, sampling, datasets, statistical interaction, selection bias, non-Gaussian distributions, the bias-variance trade-off, the normal distribution, correlation and covariance, point estimates and confidence intervals, A/B testing, p-values, statistical power and sensitivity, overfitting and underfitting, regularization, the law of large numbers, confounding variables, survivorship bias, univariate, bivariate, and multivariate analysis, resampling, ROC curves, TF-IDF vectorization, cluster sampling, etc.
Domain 1: Data Engineering
Create data repositories for machine learning.
Identify data sources (e.g., content and location, primary sources such as user data)
Determine storage mediums (e.g., DB, Data Lake, S3, EFS, EBS)
Identify and implement a data ingestion solution.
Data job styles/types (batch load, streaming)
Data ingestion pipelines (batch-based ML workloads and streaming-based ML workloads); a minimal streaming-ingestion sketch follows below.
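As a quick illustration of the streaming ingestion style mentioned above, here is a minimal sketch that pushes JSON events into an Amazon Kinesis data stream with boto3. It assumes an existing stream and configured AWS credentials; the stream name and event fields are hypothetical.

import json
import boto3

STREAM_NAME = "clickstream-events"  # hypothetical; create the stream beforehand

kinesis = boto3.client("kinesis")

def put_event(event: dict) -> None:
    # Send one JSON-encoded event to the stream (streaming-based ingestion).
    kinesis.put_record(
        StreamName=STREAM_NAME,
        Data=json.dumps(event).encode("utf-8"),
        PartitionKey=str(event.get("user_id", "anonymous")),
    )

put_event({"user_id": 42, "action": "click", "ts": "2023-05-29T09:00:00Z"})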
Domain 2: Exploratory Data Analysis
Sanitize and prepare data for modeling.
Perform feature engineering (a small pandas sketch follows after this list).
Analyze and visualize data for machine learning.
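To make the feature engineering bullet above concrete, here is a small self-contained pandas/scikit-learn sketch with made-up data: one-hot encoding a categorical column and standardizing a numeric one, two of the most common preparation steps.

import pandas as pd
from sklearn.preprocessing import StandardScaler

# Hypothetical raw data for illustration.
df = pd.DataFrame({
    "age": [23, 45, 31, 62],
    "city": ["SEA", "NYC", "SEA", "SFO"],
    "clicked": [0, 1, 0, 1],
})

# One-hot encode the categorical column, then scale the numeric one.
features = pd.get_dummies(df[["age", "city"]], columns=["city"])
features["age"] = StandardScaler().fit_transform(features[["age"]]).ravel()
print(features.head())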
Domain 3: Modeling
Frame business problems as machine learning problems.
Select the appropriate model(s) for a given machine learning problem.
Train machine learning models.
Perform hyperparameter optimization (see the sketch after this list).
Evaluate machine learning models.
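As an illustration of hyperparameter optimization, here is a minimal scikit-learn sketch using randomized search on synthetic data. On AWS the managed equivalent is SageMaker Automatic Model Tuning, but the underlying idea is the same.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = make_classification(n_samples=500, random_state=0)

# Sample a few hyperparameter combinations and keep the best by AUC.
search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions={"n_estimators": [50, 100, 200], "max_depth": [3, 5, None]},
    n_iter=5, cv=3, scoring="roc_auc", random_state=0,
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))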
Domain 4: Machine Learning Implementation and Operations
Build machine learning solutions for performance, availability, scalability, resiliency, and fault tolerance.
Recommend and implement the appropriate machine learning services and features for a given problem.
Apply basic AWS security practices to machine learning solutions.
Deploy and operationalize machine learning solutions (a minimal deployment sketch follows below).
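To make the deployment bullet concrete, here is a minimal sketch using the SageMaker Python SDK to host a trained scikit-learn model as a real-time endpoint. The S3 artifact path, IAM role, and entry-point script are hypothetical placeholders.

from sagemaker.sklearn import SKLearnModel

# Hypothetical artifact, role, and inference script; replace with your own.
model = SKLearnModel(
    model_data="s3://my-bucket/model/model.tar.gz",
    role="arn:aws:iam::123456789012:role/SageMakerRole",
    entry_point="inference.py",
    framework_version="1.0-1",
)

# Creates a managed HTTPS endpoint backed by one ml.m5.large instance.
predictor = model.deploy(initial_instance_count=1, instance_type="ml.m5.large")
print(predictor.endpoint_name)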
Machine Learning Services covered:
Amazon Comprehend
AWS Deep Learning AMIs (DLAMI)
AWS DeepLens
Amazon Forecast
Amazon Fraud Detector
Amazon Lex
Amazon Polly
Amazon Rekognition
Amazon SageMaker
Amazon Textract
Amazon Transcribe
Amazon Translate
Other Services and topics covered are:
Ingestion/Collection
Processing/ETL
Data analysis/visualization
Model training
Model deployment/inference
Operational
AWS ML application services
Language relevant to ML (for example, Python, Java, Scala, R, SQL)
Notebooks and integrated development environments (IDEs)
Amazon S3, SageMaker, Kinesis, AWS Lake Formation, Athena, Kibana, Redshift, Textract, EMR, and AWS Glue
Data formats such as CSV, JSON, images, and Parquet, as well as databases
Amazon EC2, Amazon Elastic Container Registry (Amazon ECR), Amazon Elastic Container Service (Amazon ECS), Amazon Elastic Kubernetes Service (Amazon EKS), Amazon Redshift
Important: To succeed on the real exam, do not simply memorize the answers in this app. It is very important that you understand why each answer is right or wrong, and the concepts behind it, by carefully reading the reference documents linked in the answers.
Note and disclaimer: We are not affiliated with Microsoft, Azure, Google, or Amazon. The questions are put together based on the certification study guide and materials available online. The questions in this app should help you pass the exam, but passing is not guaranteed, and we are not responsible for any exam you do not pass.

- [D] (Interview question) Comparing two models with and without negative sampling but the same AUC and log loss on the test dataset: which model is better? by /u/mayasang (Machine Learning) on May 29, 2023 at 9:52 am
Hi, I recently got this question at a tech company during an ML interview. Let's say we built a classifier that predicts certain user actions (e.g., clicks on ads). (1) How do we evaluate this model (assuming a heavily imbalanced dataset)? I mentioned that we can use AUC and normalized cross entropy (definition: the average log loss per impression divided by what the average log loss per impression would be if a model predicted the background click-through rate (CTR) for every impression [1]). As a follow-up, the interviewer asked: (2) Suppose we have two models: Model 1, trained on the original data without sampling, with AUC1 and logloss1 on the (non-sampled) eval data; and Model 2, trained on 10% negative-downsampled data, with AUC2 and logloss2 on the same (non-sampled) eval data. If AUC1 == AUC2 and logloss1 == logloss2, which metric implies that one model is better? Which metric should we look at? Which model is better? I said that if the test dataset isn't downsampled and their AUC and cross entropy are the same, the two models' quality seems to be the same. I'm not sure if this was the correct answer, and the interviewer didn't give any feedback. What do you think? Thanks for the insight in advance! [1] Practical Lessons from Predicting Clicks on Ads at Facebook, ADKDD '14
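For reference, both metrics the post mentions are easy to compute; here is a minimal sketch with made-up labels and predictions, where normalized cross entropy follows the definition quoted from [1] (log loss divided by the log loss of always predicting the background CTR).

import numpy as np
from sklearn.metrics import log_loss, roc_auc_score

# Hypothetical labels and predicted click probabilities.
y = np.array([0, 0, 0, 1, 0, 1, 0, 0])
p = np.array([0.1, 0.2, 0.05, 0.8, 0.3, 0.6, 0.15, 0.1])

auc = roc_auc_score(y, p)
ll = log_loss(y, p)

# Baseline: a model that always predicts the background CTR.
ctr = y.mean()
baseline_ll = log_loss(y, np.full_like(p, ctr))

print(auc, ll / baseline_ll)  # normalized cross entropy < 1 beats the baseline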
- [D] [LoRA + weight merge every N steps] for pre-training? by /u/kkimdev (Machine Learning) on May 29, 2023 at 9:13 am
I was wondering if we can use LoRA for pre-training by merging the LoRA weights with the frozen weights every N steps. Or is there similar pre-training research?
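As a toy illustration of the merge-every-N-steps idea the post describes (a sketch only; the layer, loss, and schedule are all made up), the key mechanic is folding the low-rank product back into the frozen weight and resetting the adapter:

import torch

d, r, N = 64, 4, 100  # width, LoRA rank, merge interval (all hypothetical)

W = torch.randn(d, d)                      # frozen base weight
A = torch.zeros(d, r, requires_grad=True)  # trainable low-rank factors
B = torch.randn(r, d, requires_grad=True)

def forward(x):
    return x @ (W + A @ B).T

for step in range(1, 1001):
    x = torch.randn(8, d)
    loss = forward(x).pow(2).mean()  # stand-in for a real pre-training loss
    loss.backward()
    with torch.no_grad():
        for p in (A, B):
            p -= 1e-3 * p.grad       # plain SGD on the adapter only
            p.grad = None
        if step % N == 0:
            W += A @ B               # fold the low-rank update into the base
            A.zero_()                # restart the adapter from zero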
- [D] ARR scores vs START softconf scores by /u/Loose-Research-3105 (Machine Learning) on May 29, 2023 at 7:59 am
How do the scores in the ARR results compare to the scores in softconf START? Can we consider the scores we received in this ARR cycle to be comparable to the scores we would have received from a direct submission in START (e.g., to EMNLP)?
- Real-time Trash Object Detection Web App [P] by /u/thelazyaz (Machine Learning) on May 29, 2023 at 5:57 am
- [R] List of SOTA models/architectures in Machine Learning by /u/SwaroopMeher (Machine Learning) on May 29, 2023 at 5:38 am
Hello, is there any comprehensive list of the latest SOTA models or architectures in mainstream tasks of AI? If not, I request you to share a few you know in the comments. With so many models out there, it's hard to find the best one for a given task at hand. I would highly appreciate it if you could share this info. I need it for my research. P.S. I know the question is vague by mentioning "AI". I just want to collect as many tasks and their respective SOTA models as possible.
- Project [P] by Machine Learning on May 29, 2023 at 5:03 am
Hey, I am new to machine learning. I am predicting child malnutrition in rural populations worldwide using a decision tree. Could you please guide me on how to extract a database for my purpose (i.e., for the rural population) from the many Excel files with complex features available on Kaggle? It would be great if you could help me out.
- [P] Does anyone have the dataset called Recipe 1M+, or something similar for inverse cooking? by /u/IntelligentUse5990 (Machine Learning) on May 29, 2023 at 3:51 am
Needed urgently, but the old links say "Internal error occurred".
- [N] Nvidia ACE Brings AI to Game Characters, Allows Lifelike Conversations by /u/geekinchief (Machine Learning) on May 29, 2023 at 3:39 am
- [D] Understanding - Understanding Diffusion Models: A Unified Perspective by /u/flerakml (Machine Learning) on May 29, 2023 at 12:16 am
I am trying to parse the very comprehensive paper by Calvin Luo: https://arxiv.org/pdf/2208.11970.pdf. Can anyone mathematically show how to go from equation (43) to (45) using properties of expectations and PGMs? I need help understanding where the variables disappear in the expectations.
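One plausible bridging step (an educated guess about what the derivation relies on, not a quote from the paper): expectations over the full trajectory collapse to expectations over only the variables the integrand mentions, because the rest integrate out. In LaTeX,

\mathbb{E}_{q(x_{1:T}\mid x_0)}\big[\log p_\theta(x_{t-1}\mid x_t)\big] = \mathbb{E}_{q(x_{t-1}, x_t\mid x_0)}\big[\log p_\theta(x_{t-1}\mid x_t)\big],

since integrating the joint q(x_{1:T} | x_0) over every x_i that the integrand does not depend on leaves exactly the marginal q(x_{t-1}, x_t | x_0). Applying this term by term is typically where variables "disappear" from such expectations.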
- [D] Teaching the Intuition Behind NNs by /u/__data_cactus__ (Machine Learning) on May 28, 2023 at 10:05 pm
Hello all, I have been teaching machine learning for a few years now, and I wrote an article about the process I use for my training courses with classes of IT professionals. It's about the strategy I use to build intuitions about NNs in a short time (without needing a CS math course). While it's mainly geared towards educators in this space, I think many of you would enjoy the read. Let me know what you think! 😀 https://medium.com/@matei.simtinica/how-i-teach-the-intuition-behind-neural-networks-d7b7ca418873
- [P] Introducing Model Lab - A new tool to make sense of training LLMs by /u/CS-fan-101 (Machine Learning) on May 28, 2023 at 9:54 pm
- [P] Genetic algorithm gets stuck - variation of the nurse scheduling problem by /u/BlackLands123 (Machine Learning) on May 28, 2023 at 7:17 pm
Hi guys, I am writing this post to ask for your help on a problem (a variation of the nurse scheduling problem) I am trying to solve using a genetic algorithm. My problem is as follows: I need to automatically generate rosters for a team consisting of a certain number of people. Each person has a different employment contract that specifies a different number of working hours per week and a different number of days off. Since I am still at the starting point, I set as my initial goal to assign each person a number of work hours per week equal to those in his or her contract.

Each individual in the population consists of a binary vector of length 7 (number of days in the week) * 8 (number of hours the store is open each day) * N (number of people in the team). This vector represents the weekly work shifts of each team member. If the algorithm assigns the value 11 to the array positions at indexes 1 and 2, it means that the first person will work on Monday from 9 to 10 (each bit is one hour of work), and so on. The algorithm is also given an array indicating the number of hours per week each person must work; e.g., [40, 40, 32, 32] means that the first person must work 40 hours, the second 40, and so on.

My problem is that the algorithm always stays fixed on one solution, even when I vary the mutation probability or the population size. In the case of a team of 4 members who must work 40, 40, 32, and 32 hours respectively, the algorithm assigns all members a pool of 36 hours. Below is part of the code for the fitness function (I used Python and the DEAP library), in which I assign a penalty proportional to how far the solution is from the correct number of hours per week each employee must work.

def countWeeklyHoursViolations(self, employeeShiftDict):
    """
    :param employeeShiftDict: a dictionary of employee shifts, keyed by
        employee name, whose values are the weekly shift bits
    :return: the weekly hours violation penalty
    """
    # The genetic algorithm will try to minimize this function.
    weeklyHoursViolation = 0
    for employee in self.employees:
        weekly_hours_calculated = sum(employeeShiftDict[employee])
        weekly_hours_expected = self.weeklyHours[self.employees.index(employee)]
        # Squared difference between expected and calculated weekly hours:
        # a higher penalty for a larger difference.
        weeklyHoursViolation += self.hardConstraintPenalty * abs(
            weekly_hours_calculated - weekly_hours_expected) ** 2
    return weeklyHoursViolation

What do you recommend that I do? Do you have any ideas? I thank you in advance!
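For readers unfamiliar with DEAP, a minimal wiring sketch for the binary representation the post describes might look like this (names and parameters are illustrative, not the poster's actual setup):

import random
from deap import base, creator, tools

DAYS, HOURS, PEOPLE = 7, 8, 4
GENOME_LEN = DAYS * HOURS * PEOPLE

creator.create("FitnessMin", base.Fitness, weights=(-1.0,))  # minimize the penalty
creator.create("Individual", list, fitness=creator.FitnessMin)

toolbox = base.Toolbox()
toolbox.register("bit", random.randint, 0, 1)
toolbox.register("individual", tools.initRepeat, creator.Individual,
                 toolbox.bit, GENOME_LEN)
toolbox.register("population", tools.initRepeat, list, toolbox.individual)
toolbox.register("mate", tools.cxTwoPoint)
toolbox.register("mutate", tools.mutFlipBit, indpb=1.0 / GENOME_LEN)
toolbox.register("select", tools.selTournament, tournsize=3)

When a GA stalls on one solution like this, common levers are a larger indpb, a bigger tournament size, or seeding the population with near-feasible rosters rather than uniform random bits.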
- [R] UMat: Uncertainty-Aware Single Image High Resolution Material Capture by /u/crp1994 (Machine Learning) on May 28, 2023 at 4:15 pm
https://i.redd.it/rhzc83xfkl2b1.gif
- Are AI developers not paying enough attention to this? [D] by /u/ThePanArchitect (Machine Learning) on May 28, 2023 at 3:53 pm
First of all, excuse my lack of proper terminology and technical knowledge about the matter; AI and IT are not my fields of expertise, but I'm an architect with huge enthusiasm for AI who enjoys discussing with those in the field. So my question is: why don't we see more AI development to generate architectural buildings in terms of walls, doors, windows, and all the other elements that constitute a building? Hear me out.

The integration of AI in architecture has been intensively discussed, if not already taking place. However, from my outlook, it seems to be happening on a relatively superficial level, i.e., through image generation using text prompts such as Midjourney or ControlNet. I have yet to see a tool or a model that truly understands geometry or 3D shapes, even though geometry can, technically speaking, be represented via text, or via mathematical formulas for more complex surfaces and shapes. And if geometry can be converted into text, it can be understood and pre-trained on, correct?

An excellent research paper called "Architext" already stated a proof of concept for such an idea, and I think that digging deeper into this idea of representing geometry as text (walls, windows, doors, etc., or any other format that can be pre-trained on) will definitely hit a spot. Perhaps a wall can be represented by a tuple such as: (baselineL1[Startpoint(x1,y1), Endpoint(x2,y2)], thickness=250 mm, height=2800). In fact, there is a file format called IFC, which is basically a conversion of an entire BIM into text. Maybe IFC could be used as the training set?
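As a toy illustration of the "geometry as text" idea (a sketch in the spirit of the tuple the post proposes, not the actual IFC format), a wall can be serialized into a token stream a language model could consume:

import json
from dataclasses import dataclass, asdict
from typing import Tuple

@dataclass
class Wall:
    start: Tuple[float, float]
    end: Tuple[float, float]
    thickness_mm: float
    height_mm: float

wall = Wall(start=(0.0, 0.0), end=(5000.0, 0.0), thickness_mm=250, height_mm=2800)
print(json.dumps(asdict(wall)))  # e.g. {"start": [0.0, 0.0], "end": [5000.0, 0.0], ...}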
- [P] Sophia (Programmed-out) by /u/Sleepin-tiger4 (Machine Learning) on May 28, 2023 at 3:31 pm
Stanford released a remarkable new second-order optimizer known as Sophia, which uses an estimator and a clipping mechanism. According to the paper, it is 100K steps more efficient and takes significantly less wall-clock time to compute. The paper is amazing and a milestone, at least according to me.

They did not provide any code, but they provided pseudocode and the algorithm to program the optimizer. I find it helpful to program or read the code rather than just reading the literature itself, even its pseudocode. That's why I took the time to write a function that implements the optimizer. If you're interested in what hyperparameters they used, it's very clear in their paper; they also mention obtaining the hyperparameters for Sophia using a grid search based on AdamW's and Lion's parameter choices. It was a very fast project, so I was only able to write the code in a very basic way, with no PyTorch or JAX whatsoever. I am optimistic about adding a training script and a few nifty features, but that's not for a few weeks. I personally think reading the code and learning Sophia will be very helpful, and for many it can provide a new research direction (maybe for your thesis as well).

Contribution: Rome wasn't built by one person. If you think you have something to offer, feel free to contribute to the repository. It'll help others learn, and you as well. And if you have found my work interesting or helpful, consider giving it a star; it helps the repository be visible to many people and motivates me to keep providing updates and cool stuff for the project.

GitHub code: https://github.com/sleepingcat4/Sophia Paper link: https://arxiv.org/abs/2305.14342
- [P] Historical Tidbits about Transformers: About LayerNorm Variants in the Original Transformer Paper & Schmidhuber's Fast Weight Programmers from the 1990's by /u/seraschka (Machine Learning) on May 28, 2023 at 2:39 pm
- [D] TCG card recognizer app by /u/Levissie (Machine Learning) on May 28, 2023 at 9:13 am
Hi all. I came across this app that recognizes trading cards, and I am curious what methods they used to implement it. What do you think they used, or what would be a good method for implementing this type of functionality? E.g., would classification solely on the image work here, or would it be a good strategy to first perform text extraction, and then use the text for classification? Any insights/ideas are welcome!
- [P] GirlfriendGPT - build your own AI girlfriend by /u/Yajirobe404 (Machine Learning) on May 28, 2023 at 9:03 am
- [P] Plakakia (tiles in Greek) is an image tiling library I made for quickly generating tiles from images. It would be great if people try it and give some feedback / raise issues on GitHub. It's the first open-source library I ever made, so hopefully I learn from more experienced people. by /u/kalfasyan (Machine Learning) on May 28, 2023 at 7:15 am
- Uncensored models, fine-tuned without artificial moralizing, such as “Wizard-Vicuna-13B-Uncensored-HF”, perform well at LLM eval benchmarks even when compared with larger 65B, 40B, and 30B models. Have there been any studies about how censorship handicaps a model’s capabilities? by /u/hardmaru (Machine Learning) on May 28, 2023 at 4:03 am
- [D] (Interview question) What happens if we add an L3 term to a logistic regression model? by /u/mayasang (Machine Learning) on May 28, 2023 at 3:21 am
Hi, I recently got this question during an interview with a tech company. I answered that it would have a more dramatic effect than the L2 term has, making the weight coefficients even smaller. The interviewer said that there is an even more important aspect: it now makes the problem non-convex, because a third-order function is no longer convex. Can anyone elaborate on this explanation? Does adding an L3 term to the log-likelihood really make the cost function non-convex?

I tried asking Google and ChatGPT, and ChatGPT says that the logistic regression model remains convex: "In logistic regression, the objective function is typically a log-likelihood function that is maximized or, equivalently, a negative log-likelihood function that is minimized. When regularization is added, the regularization term is added to the negative log-likelihood to create the regularized objective function. The addition of L3 regularization does not introduce non-convexity. The convexity of the logistic regression model with L3 regularization can be proven mathematically by analyzing the Hessian matrix of the objective function. The Hessian matrix is positive semi-definite, which confirms convexity. So, even with the inclusion of an L3 regularization term, the logistic regression model remains convex, and convex optimization techniques can be used to find the optimal solution efficiently."
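As a quick sanity check on the convexity question (an editorial sketch, not part of the original thread): the curvature of a penalty lambda * |w|^3 is 6 * lambda * |w| >= 0, so the absolute-value version stays convex, while a signed w^3 penalty does not. A few lines of NumPy make the distinction visible:

import numpy as np

w = np.linspace(-2, 2, 9)
lam = 0.5

# Second derivative of lam * |w|**3 is 6 * lam * |w|: never negative, so convex.
curv_abs = 6 * lam * np.abs(w)
# Second derivative of lam * w**3 is 6 * lam * w: negative for w < 0, so non-convex.
curv_signed = 6 * lam * w

print(curv_abs.min() >= 0)     # True
print(curv_signed.min() >= 0)  # False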
- [R] Using LLMs for multi-hop document reranking with only a few examples. by /u/moyle (Machine Learning) on May 28, 2023 at 2:57 am
Short summary: use LLMs to rank a given set of documents based on the likelihood of the question given the documents; this shows comparable performance to fully supervised retrieval systems. Arxiv: https://arxiv.org/abs/2205.12650 GitHub: https://github.com/mukhal/PromptRank
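A rough sketch of the query-likelihood idea (not the paper's actual code; see the GitHub link above for that): score each document by the log-probability a causal LM assigns to the question conditioned on the document, then sort.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def question_logprob(document: str, question: str) -> float:
    # Log P(question | document) under the LM, summed over question tokens.
    prompt = f"Document: {document}\nQuestion:"
    prompt_len = tok(prompt, return_tensors="pt").input_ids.shape[1]
    full_ids = tok(prompt + " " + question, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    logprobs = torch.log_softmax(logits[0, :-1], dim=-1)  # row j predicts token j+1
    targets = full_ids[0, 1:]
    rows = torch.arange(prompt_len - 1, targets.shape[0])
    return logprobs[rows, targets[rows]].sum().item()

docs = ["Paris is the capital of France.", "Cats sleep a lot."]
question = "What is the capital of France?"
print(sorted(docs, key=lambda d: question_logprob(d, question), reverse=True)[0])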
- [N] ChatGPT Plugins Open Security Holes From PDFs, Websites by /u/geekinchief (Machine Learning) on May 27, 2023 at 5:25 pm
- [R] Improving Factuality and Reasoning in Language Models through Multiagent Debate by /u/BidImpossible555 (Machine Learning) on May 27, 2023 at 1:53 pm
- [D] Which evaluation metrics actually matter? by /u/MohamedRashad (Machine Learning) on May 27, 2023 at 11:09 am
I keep reading about open-source LLMs that are on par with ChatGPT and GPT-4, but when I try them I find them far from OpenAI's models. The best metric I found aligning with my findings was the Elo rating by lmsys (the authors of Vicuna). What other metrics are used to truly evaluate LLMs and give us authentic numbers about their capabilities?
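For context, the lmsys leaderboard mentioned above derives an Elo-style rating from pairwise human votes between model outputs. A bare-bones version of the update rule (an illustrative sketch, not lmsys's code) looks like this:

def elo_update(r_a: float, r_b: float, score_a: float, k: float = 32.0):
    # One Elo update after a pairwise comparison.
    # score_a is 1.0 if model A's answer won, 0.0 if it lost, 0.5 for a tie.
    expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))
    r_a_new = r_a + k * (score_a - expected_a)
    r_b_new = r_b + k * ((1.0 - score_a) - (1.0 - expected_a))
    return r_a_new, r_b_new

# Both models start at 1000; model A wins one comparison.
print(elo_update(1000.0, 1000.0, 1.0))  # (1016.0, 984.0)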
- Create high-quality images with Stable Diffusion models and deploy them cost-efficiently with Amazon SageMaker by Simon Zamarin (AWS Machine Learning Blog) on May 26, 2023 at 5:57 pm
Text-to-image generation is a task in which a machine learning (ML) model generates an image from a textual description. The goal is to generate an image that closely matches the description, capturing the details and nuances of the text. This task is challenging because it requires the model to understand the semantics and syntax of
- Build a powerful question answering bot with Amazon SageMaker, Amazon OpenSearch Service, Streamlit, and LangChain by Amit Arora (AWS Machine Learning Blog) on May 25, 2023 at 4:46 pm
One of the most common applications of generative AI and large language models (LLMs) in an enterprise environment is answering questions based on the enterprise’s knowledge corpus. Amazon Lex provides the framework for building AI-based chatbots. Pre-trained foundation models (FMs) perform well at natural language understanding (NLU) tasks such as summarization, text generation and question
- Get insights on your user’s search behavior from Amazon Kendra using an ML-powered serverless stack by Genta Watanabe (AWS Machine Learning Blog) on May 25, 2023 at 3:38 pm
Amazon Kendra is a highly accurate and intelligent search service that enables users to search unstructured and structured data using natural language processing (NLP) and advanced search algorithms. With Amazon Kendra, you can find relevant answers to your questions quickly, without sifting through documents. However, just enabling end-users to get the answers to their queries
- How OCX Cognition reduced ML model development time from weeks to days and model update time from days to real time using AWS Step Functions and Amazon SageMaker by Brian Curry (AWS Machine Learning Blog) on May 25, 2023 at 3:24 pm
This post was co-authored by Brian Curry (Founder and Head of Products at OCX Cognition) and Sandhya MN (Data Science Lead at InfoGain) OCX Cognition is a San Francisco Bay Area-based startup, offering a commercial B2B software as a service (SaaS) product called Spectrum AI. Spectrum AI is a predictive (generative) CX analytics platform for
- Dialogue-guided intelligent document processing with foundation models on Amazon SageMaker JumpStart by Alfred Shen (AWS Machine Learning Blog) on May 24, 2023 at 4:50 pm
Intelligent document processing (IDP) is a technology that automates the processing of high volumes of unstructured data, including text, images, and videos. IDP offers a significant improvement over manual methods and legacy optical character recognition (OCR) systems by addressing challenges such as cost, errors, low accuracy, and limited scalability, ultimately leading to better outcomes for
- Automate document validation and fraud detection in the mortgage underwriting process using AWS AI services: Part 1 by Anup Ravindranath (AWS Machine Learning Blog) on May 24, 2023 at 4:19 pm
In this three-part series, we present a solution that demonstrates how you can automate detecting document tampering and fraud at scale using AWS AI and machine learning (ML) services for a mortgage underwriting use case. This solution rides on a more significant global wave of increasing mortgage fraud, which is worsening as more people present
- Perform batch transforms with Amazon SageMaker JumpStart Text2Text Generation large language models by Hemant Singh (AWS Machine Learning Blog) on May 24, 2023 at 4:13 pm
Today we are excited to announce that you can now perform batch transforms with Amazon SageMaker JumpStart large language models (LLMs) for Text2Text Generation. Batch transforms are useful in situations where the responses don’t need to be real time and therefore you can do inference in batch for large datasets in bulk. For batch transform,
- Index your Confluence content using the new Confluence connector V2 for Amazon Kendra by Ashish Lagwankar (AWS Machine Learning Blog) on May 23, 2023 at 8:23 pm
Amazon Kendra is a highly accurate and simple-to-use intelligent search service powered by machine learning (ML). Amazon Kendra offers a suite of data source connectors to simplify the process of ingesting and indexing your content, wherever it resides. Valuable data in organizations is stored in both structured and unstructured repositories. An enterprise search solution should
- Accelerate machine learning time to value with Amazon SageMaker JumpStart and PwC’s MLOps accelerator by Vik Pant (AWS Machine Learning Blog) on May 23, 2023 at 8:18 pm
This is a guest blog post co-written with Vik Pant and Kyle Bassett from PwC. With organizations increasingly investing in machine learning (ML), ML adoption has become an integral part of business transformation strategies. A recent PwC CEO survey unveiled that 84% of Canadian CEOs agree that artificial intelligence (AI) will significantly change their business
- Deploy generative AI models from Amazon SageMaker JumpStart using the AWS CDK by Hantzley Tauckoor (AWS Machine Learning Blog) on May 23, 2023 at 8:08 pm
The seeds of a machine learning (ML) paradigm shift have existed for decades, but with the ready availability of virtually infinite compute capacity, a massive proliferation of data, and the rapid advancement of ML technologies, customers across industries are rapidly adopting and using ML technologies to transform their businesses. Just recently, generative AI applications have
- Instruction fine-tuning for FLAN T5 XL with Amazon SageMaker Jumpstart by Laurent Callot (AWS Machine Learning Blog) on May 22, 2023 at 5:38 pm
Generative AI is in the midst of a period of stunning growth. Increasingly capable foundation models are being released continuously, with large language models (LLMs) being one of the most visible model classes. LLMs are models composed of billions of parameters trained on extensive corpora of text, up to hundreds of billions or even a
- [D] Simple Questions Thread by /u/AutoModerator (Machine Learning) on May 21, 2023 at 3:00 pm
Please post your questions here instead of creating a new thread. Encourage others who create new posts for questions to post here instead! The thread will stay alive until the next one, so keep posting after the date in the title. Thanks to everyone for answering questions in the previous thread!
- Introducing an image-to-speech Generative AI application using Amazon SageMaker and Hugging Face by Jack Marchetti (AWS Machine Learning Blog) on May 19, 2023 at 6:11 pm
Vision loss comes in various forms. For some, it’s from birth; for others, it’s a slow descent over time that comes with many expiration dates: the day you can’t see pictures, recognize yourself or loved ones’ faces, or even read your mail. In our previous blog post, Enable the Visually Impaired to Hear Documents using Amazon
- Announcing the updated Microsoft SharePoint connector (V2.0) for Amazon Kendra by Udaya Jaladi (AWS Machine Learning Blog) on May 18, 2023 at 5:25 pm
Amazon Kendra is a highly accurate and simple-to-use intelligent search service powered by machine learning (ML). Amazon Kendra offers a suite of data source connectors to simplify the process of ingesting and indexing your content, wherever it resides. Valuable data in organizations is stored in both structured and unstructured repositories. Amazon Kendra can pull together
- Build a serverless meeting summarization backend with large language models on Amazon SageMaker JumpStart by Eric Kim (AWS Machine Learning Blog) on May 17, 2023 at 7:04 pm
AWS delivers services that meet customers’ artificial intelligence (AI) and machine learning (ML) needs with services ranging from custom hardware like AWS Trainium and AWS Inferentia to generative AI foundation models (FMs) on Amazon Bedrock. In February 2022, AWS and Hugging Face announced a collaboration to make generative AI more accessible and cost efficient. Generative
- Prepare training and validation dataset for facies classification using Snowflake integration and train using Amazon SageMaker Canvas by Nick McCarthy (AWS Machine Learning Blog) on May 17, 2023 at 6:57 pm
This post is co-written with Thatcher Thornberry from bpx energy. Facies classification is the process of segmenting lithologic formations from geologic data at the wellbore location. During drilling, wireline logs are obtained, which have depth-dependent geologic information. Geologists are deployed to analyze this log data and determine depth ranges for potential facies of interest from
- GPT-NeoXT-Chat-Base-20B foundation model for chatbot applications is now available on Amazon SageMaker by Rachna Chadha (AWS Machine Learning Blog) on May 16, 2023 at 4:42 pm
Today we are excited to announce that Together Computer’s GPT-NeoXT-Chat-Base-20B language foundation model is available for customers using Amazon SageMaker JumpStart. GPT-NeoXT-Chat-Base-20B is an open-source model to build conversational bots. You can easily try out this model and use it with JumpStart. JumpStart is the machine learning (ML) hub of Amazon SageMaker that provides access
- Demand forecasting at Getir built with Amazon Forecast by Nafi Ahmet Turgut (AWS Machine Learning Blog) on May 15, 2023 at 5:53 pm
This is a guest post co-authored by Nafi Ahmet Turgut, Mutlu Polatcan, Pınar Baki, Mehmet İkbal Özmen, Hasan Burak Yel, and Hamza Akyıldız from Getir. Getir is the pioneer of ultrafast grocery delivery. The tech company has revolutionized last-mile delivery with its “groceries in minutes” delivery proposition. Getir was founded in 2015 and operates in
- Introducing Amazon Textract Bulk Document Uploader for enhanced evaluation and analysis by Shashwat Sapre (AWS Machine Learning Blog) on May 15, 2023 at 5:47 pm
Amazon Textract is a machine learning (ML) service that automatically extracts text, handwriting, and data from any document or image. To make it simpler to evaluate the capabilities of Amazon Textract, we have launched a new Bulk Document Uploader feature on the Amazon Textract console that enables you to quickly process your own set of
- AI-powered code suggestions and security scans in Amazon SageMaker notebooks using Amazon CodeWhisperer and Amazon CodeGuru by Raj Pathak (AWS Machine Learning Blog) on May 12, 2023 at 8:29 pm
Amazon SageMaker comes with two options to spin up fully managed notebooks for exploring data and building machine learning (ML) models. The first option is fast start, collaborative notebooks accessible within Amazon SageMaker Studio—a fully integrated development environment (IDE) for machine learning. You can quickly launch notebooks in Studio, easily dial up or down the
- Unlock insights from your Amazon S3 data with intelligent search by Rajesh Kumar Ravi (AWS Machine Learning Blog) on May 12, 2023 at 2:43 pm
Amazon Kendra is an intelligent search service powered by machine learning (ML). Amazon Kendra reimagines enterprise search for your websites and applications so your employees and customers can easily find the content they’re looking for, even when it’s scattered across multiple locations and content repositories within your organization. Keywords or natural language questions can be
- Reminder: Use the report button and read the rules! by /u/MTGTraner (Machine Learning) on March 24, 2023 at 9:32 am
Download AWS Machine Learning Specialty Exam Prep App on iOS

Download AWS Machine Learning Specialty Exam Prep App on Android/Web/Amazon