The AWS Certified Machine Learning – Specialty certification validates expertise in building, training, tuning, and deploying machine learning (ML) models on AWS.
Use this app to learn about machine learning on AWS and prepare for the AWS Certified Machine Learning – Specialty (MLS-C01) exam.
Download AWS Machine Learning Specialty Exam Prep App on iOS
Download AWS Machine Learning Specialty Exam Prep App on Android/Web/Amazon

The app provides hundreds of quizzes and practice exam questions covering:
– Machine Learning Operations on AWS
– Modeling
– Data Engineering
– Computer Vision
– Exploratory Data Analysis
– ML Implementation & Operations
– Machine Learning Basics Questions and Answers
– Machine Learning Advanced Questions and Answers
– Scorecard
– Countdown timer
– Machine Learning Cheat Sheets
– Machine Learning Interview Questions and Answers
– Machine Learning Latest News
The app covers machine learning basics and advanced topics including: NLP, computer vision, Python, linear regression, logistic regression, sampling, datasets, statistical interaction, selection bias, non-Gaussian distributions, the bias-variance trade-off, the normal distribution, correlation and covariance, point estimates and confidence intervals, A/B testing, p-values, statistical power and sensitivity, overfitting and underfitting, regularization, the law of large numbers, confounding variables, survivorship bias, univariate, bivariate, and multivariate analysis, resampling, ROC curves, TF-IDF vectorization, cluster sampling, etc.
Domain 1: Data Engineering
Create data repositories for machine learning.
Identify data sources (e.g., content and location, primary sources such as user data)
Determine storage mediums (e.g., DB, Data Lake, S3, EFS, EBS)
Identify and implement a data ingestion solution.
Data job styles/types (batch load, streaming)
Data ingestion pipelines (batch-based ML workloads and streaming-based ML workloads), etc. (see the streaming sketch below)
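As a concrete taste of the streaming side of Domain 1, here is a minimal boto3 sketch that pushes one JSON record into an Amazon Kinesis data stream (the stream name and record fields are hypothetical; the stream must already exist):

```python
import json
import boto3

# Hypothetical stream; create it beforehand, e.g.
#   aws kinesis create-stream --stream-name ml-ingest --shard-count 1
kinesis = boto3.client("kinesis", region_name="us-east-1")

record = {"user_id": 42, "feature_a": 0.73, "label": 1}
kinesis.put_record(
    StreamName="ml-ingest",
    Data=json.dumps(record).encode("utf-8"),
    PartitionKey=str(record["user_id"]),  # determines shard assignment
)
```

Downstream, the same stream can feed a consumer such as Kinesis Data Firehose or a Lambda function that lands records in Amazon S3 for training.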
Domain 2: Exploratory Data Analysis
Sanitize and prepare data for modeling.
Perform feature engineering.
Analyze and visualize data for machine learning.
Domain 3: Modeling
Frame business problems as machine learning problems.
Select the appropriate model(s) for a given machine learning problem.
Train machine learning models.
Perform hyperparameter optimization.
Evaluate machine learning models.
Domain 4: Machine Learning Implementation and Operations
Build machine learning solutions for performance, availability, scalability, resiliency, and fault tolerance.
Recommend and implement the appropriate machine learning services and features for a given problem.
Apply basic AWS security practices to machine learning solutions.
Deploy and operationalize machine learning solutions.
Machine Learning Services covered:
Amazon Comprehend
AWS Deep Learning AMIs (DLAMI)
AWS DeepLens
Amazon Forecast
Amazon Fraud Detector
Amazon Lex
Amazon Polly
Amazon Rekognition
Amazon SageMaker
Amazon Textract
Amazon Transcribe
Amazon Translate
Other Services and topics covered are:
Ingestion/Collection
Processing/ETL
Data analysis/visualization
Model training
Model deployment/inference
Operational
AWS ML application services
Language relevant to ML (for example, Python, Java, Scala, R, SQL)
Notebooks and integrated development environments (IDEs)
Amazon S3, SageMaker, Kinesis, Lake Formation, Athena, Kibana, Redshift, Textract, EMR, and Glue; data formats such as CSV, JSON, IMG, and Parquet; and databases
Amazon EC2, Amazon Elastic Container Registry (Amazon ECR), Amazon Elastic Container Service (Amazon ECS), Amazon Elastic Kubernetes Service (Amazon EKS), Amazon Redshift
Important: To succeed on the real exam, do not memorize the answers in this app. It is very important that you understand why each answer is right or wrong, and the concepts behind it, by carefully reading the reference documents linked in the answers.
Note and disclaimer: We are not affiliated with Microsoft, Azure, Google, or Amazon. The questions are put together based on the certification study guide and materials available online. The questions in this app should help you pass the exam, but passing is not guaranteed, and we are not responsible for the outcome of any exam you take.

- [P] methods of getting PyTorch or TensorFlow to work with AMD RX 580? Please read description for more info! Will be very active until resolved! by /u/9876123 (Machine Learning) on November 29, 2023 at 1:15 am
So far I have only had marginal success with the tensorflow-directml Python library; however, it seems to perform poorly and may have a memory issue (a small model and dataset, yet it tries to allocate 5-19 GB of GPU RAM or system RAM). I have tried to use ROCm, but this has led me to constant dead ends even with older versions and Docker-based installs. At this point, I have no other ideas for getting these frameworks to work with my AMD RX 580, and purchasing a new GPU is not an option.
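For anyone in the same situation, one low-effort thing to try before giving up on tensorflow-directml is constraining how much memory TensorFlow grabs up front. A minimal sketch using standard tf.config calls; whether the DirectML backend fully honors these settings is an assumption to verify on your own setup:

```python
import tensorflow as tf

# With tensorflow-directml-plugin the RX 580 should show up as a 'GPU'
# device (with the legacy tensorflow-directml fork it may appear as 'DML').
gpus = tf.config.list_physical_devices("GPU")
print(gpus)

if gpus:
    # Allocate memory on demand instead of reserving a large pool up front.
    # Must be called before the device is first used.
    tf.config.experimental.set_memory_growth(gpus[0], True)
    # Alternatively (instead of memory growth), hard-cap the device,
    # e.g. at ~3.5 GB for an 8 GB RX 580:
    # tf.config.set_logical_device_configuration(
    #     gpus[0],
    #     [tf.config.LogicalDeviceConfiguration(memory_limit=3584)])
```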
- [D] What is the motivation for parameter-efficient fine tuning if there's no significant reduction in runtime or GPU memory usage? by /u/patricky168 (Machine Learning) on November 29, 2023 at 1:06 am
I've been playing around with methods such as prompt tuning and LoRA, which are parameter-efficient as they only fine-tune a very small fraction (that is, <1%) of all parameters. But for both methods, you have to cache the intermediate gradients during backprop, meaning that you don't save on GPU memory during fine-tuning (or at most a small amount, from not having to store optimizer states for frozen layers). For instance, LoRA reduced the GPU memory footprint for my custom model from 8.5 GB to 8.1 GB, which is very minimal. Fine-tuning time reduction also isn't really a major advantage: fine-tuning the same model was reduced by 20 ms per batch, from 210 ms to 190 ms.

This begs the question: what really is the practical reason for the popularity of parameter-efficient fine-tuning (e.g., prompt tuning, with 1.6k+ citations) if it doesn't really save on GPU memory and training time? I can see two possible reasons, but I'm not really convinced they explain the 'hype' around parameter-efficient fine-tuning:

1. The fine-tuned model checkpoint for the downstream task is very significantly smaller. For example, in prompt tuning, we only need to save the tiny trained soft prompt (a few megabytes) rather than the entire set of changed model weights (many, many GBs) on our hard disk/SSD. But from a practical point of view, I feel that most people suffer more from a lack of compute (e.g., GPU memory) than a lack of disk space. In other words, training time and GPU memory consumption seem like more relevant concerns than checkpoint storage space.

2. Robustness to domain shift (since we are preserving the majority of the original model's weights rather than destructively re-learning them), which was mentioned in the prompt tuning paper but not so much in the LoRA paper. I could see this as a possible reason, but the out-of-distribution gains in the prompt tuning paper are marginal at best, and LoRA doesn't mention domain shift.

(EDIT: I'm also wondering if there is something else I'm missing to decrease GPU memory and runtime. I've heard QLoRA adds 4-bit quantization of the base model on top of LoRA, so perhaps that's a way to tackle memory efficiency for LoRA. But I don't know if there's anything comparable to reduce the memory footprint of prompt tuning.)
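For readers following along, here is a minimal sketch of the mechanics being discussed: a LoRA-style wrapper around a frozen linear layer in PyTorch (the rank, scaling, and init values are illustrative, not taken from any paper's code):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen pretrained linear layer plus a trainable low-rank update:
    y = W x + (alpha / r) * B (A x)."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # freeze pretrained weights
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # no-op init
        self.scale = alpha / r

    def forward(self, x):
        # Backprop still flows through self.base(x), which is why activation
        # memory is barely reduced; the savings come from optimizer state
        # for frozen weights and from the tiny checkpoint.
        return self.base(x) + self.scale * (x @ self.A.T) @ self.B.T

layer = LoRALinear(nn.Linear(768, 768))
print(sum(p.numel() for p in layer.parameters() if p.requires_grad))  # ~12k trainable vs ~590k total
```

The comment in forward() is the crux of the post: the frozen path still participates in the backward pass, so the memory that dominates fine-tuning is mostly unchanged.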
- [D] AI Video Avatar Open Source by /u/TernaryJimbo (Machine Learning) on November 28, 2023 at 11:59 pm
Does anyone know or have a guess at what machine learning models are being used behind services like heygen.com or https://www.synthesia.io? I'm curious if there are open-source alternative models, like there are for Stable Diffusion. I have seen things like SadTalker, but that just animates a still image, which isn't the same quality as these services.
- [P] Customising LSTM in Python by /u/FaithlessnessOk1255 (Machine Learning) on November 28, 2023 at 11:50 pm
Hi, is there a way to customise the LSTM model in Python? I need to remove the forget gate and peephole connections, but I haven't found a way to implement this yet. I'm grateful for any help.
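Neither Keras nor PyTorch exposes a flag for removing gates from their built-in fused LSTMs, so the usual route is to write the cell yourself. A minimal PyTorch sketch of a cell with no forget gate and no peephole connections, where the cell state is updated additively as in the original 1997 LSTM (names and sizes are illustrative):

```python
import torch
import torch.nn as nn

class NoForgetLSTMCell(nn.Module):
    """LSTM cell with input and output gates only (no forget gate,
    no peepholes): c_t = c_{t-1} + i_t * g_t,  h_t = o_t * tanh(c_t)."""
    def __init__(self, input_size: int, hidden_size: int):
        super().__init__()
        # One fused projection for input gate i, output gate o, and candidate g.
        self.proj = nn.Linear(input_size + hidden_size, 3 * hidden_size)
        self.hidden_size = hidden_size

    def forward(self, x, state):
        h, c = state
        z = self.proj(torch.cat([x, h], dim=-1))
        i, o, g = z.chunk(3, dim=-1)
        c = c + torch.sigmoid(i) * torch.tanh(g)   # no forget gate on c
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, (h, c)

cell = NoForgetLSTMCell(16, 32)
h = c = torch.zeros(4, 32)
for t in range(10):                    # unroll over a toy sequence
    x_t = torch.randn(4, 16)
    h, (h, c) = cell(x_t, (h, c))
```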
- [Discussion] Knowledge distillation is badly defined by /u/Cosmolithe (Machine Learning) on November 28, 2023 at 9:58 pm
Or, less provocatively: knowledge distillation is a family of techniques with no clear goal. In the literature, it seems to me there are two types of knowledge distillation goals that are usually considered:

1. having a student model learn from a teacher model to reproduce the answers of the teacher on the teacher's training dataset (very important);
2. having a student model learn from a teacher model to reproduce the answers the teacher predicts on another downstream dataset, so neither the teacher nor the student "saw" those data points before.

However, I can imagine another objective that I find reasonable but that doesn't seem to be explored much:

3. having the student model copy the teacher's outputs everywhere, that is, training the student to produce the same answers as the teacher for any data point. This could be implemented by setting the loss function to the integral over x of the mean squared difference between the predictions of the teacher and the student. I don't really know how one would solve this particular problem though; maybe using importance sampling? If anyone knows of a paper that tries (3), please post the link.

Discussion: knowledge distillation has possibly one of these three objectives (or maybe even another). Depending on which objective is actually used to train the student, the results should vary greatly.

Teacher trained on set A, student trained on set A. In the case of (1), it isn't really surprising that the student won't outperform the teacher, given that the student is usually smaller or has a simpler architecture. The student cannot beat the teacher on his own field if the student is evaluated on the same test set as the teacher. However, the student might completely fail to match the teacher's predictions on out-of-distribution (OOD) data, as it has never been trained on OOD samples. In these cases I am not sure whether the student might outperform the teacher or not.

Teacher trained on set A, student trained on set B. In the case of (2), I wouldn't find it surprising that the student outperforms the teacher, given that the new downstream distribution is usually quite different from the training distribution: the teacher is never given a chance to learn from the new dataset, but the student is given both the previous information (through the teacher) and the new information (through the inputs, which become gradients, which become parameter updates). I am even tempted to say it would be surprising for a similarly-sized student to be beaten by the teacher under these conditions. That is why I think it would be better if we acknowledged this potential effect before being surprised that student models outperform teachers. This seems particularly important in this era of LLMs trained on other LLMs' generated data. It seems to me that people overestimate the effect of distillation, when in fact it might simply be transfer learning: the increase in performance would be attributable to the use of additional data, or to the selection mechanism for the generated data, more than to the act of distilling.

Teacher trained on set A, student trained on the entire input domain. For (3), I imagine this is actually the truest form of knowledge distillation, as the goal is really to copy a (very complex) function on the entire domain and not just on a small part of it. I guess this is what is called "model compression" in the literature, but I never saw the problem framed like that. This method should be robust to out-of-distribution samples in the sense that the student should make about the same predictions as the teacher; in other words, if the teacher is robust to OOD data, so should be the student. I expect the student to perform a bit better than the teacher on any given test set, because the student should "smooth out" any abnormal (artifact) predictions made by the teacher, creating a form of regularization. I might be wrong though. Feel free to post your thoughts on this.
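To make objective (3) concrete, here is a minimal PyTorch sketch that approximates the integral by Monte Carlo sampling from a broad proposal distribution over the input domain (the proposal, architectures, and hyperparameters are illustrative assumptions, not from any published method):

```python
import torch
import torch.nn as nn

def domain_distillation_step(student, teacher, opt,
                             in_dim=32, batch=256, spread=3.0):
    """One step of 'copy the teacher everywhere': match teacher outputs
    on inputs drawn from a broad proposal over the whole input domain."""
    x = torch.randn(batch, in_dim) * spread   # crude stand-in for "any x"
    with torch.no_grad():
        target = teacher(x)
    loss = nn.functional.mse_loss(student(x), target)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

teacher = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 10))
student = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 10))
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
for step in range(1000):
    domain_distillation_step(student, teacher, opt)
```

Importance sampling, as the post suggests, would replace the crude Gaussian proposal with one weighted toward regions where teacher and student disagree.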
- [R] Dataset in a day. A clustering-based approach for fast dataset creation by /u/Dutchcheesehead (Machine Learning) on November 28, 2023 at 7:43 pm
https://medium.com/bumble-tech/dataset-in-a-day-7f369de3b178
- [D] Advice on switching into research by /u/Odd-Distance-4439 (Machine Learning) on November 28, 2023 at 7:23 pm
I work as an MLE and have a publication under my belt. I want to switch into research and work for DeepMind. I need advice on 1) how to prepare for the interviews and 2) what differences in skill set are needed to succeed in research as compared to corporate work.
- [Discussion] What part of your job do you find yourself wasting the most time on each week? by /u/zero-true (Machine Learning) on November 28, 2023 at 6:52 pm
Not asking about procrastination, but actual job duties. For me it has to be working on spreadsheets and presentations. What are the bottlenecks in your workflow?
- AMD GPU for machine learning [D] by /u/the_fabbest (Machine Learning) on November 28, 2023 at 6:44 pm
I am a CS student and I recently bought the components for my first gaming computer. I found a 6800 for a good price, but I was wondering if there could be trouble with machine-learning-oriented libraries (and other related tooling), since I found out that NVIDIA GPUs are more recommended for this. If so, can I still get decent results with the AMD GPU I have, or should I change it?
- [P] minOFT: An Easy-to-Use PyTorch Library for Applying Orthogonal Fine-Tuning (OFT) to PyTorch Models by /u/0blue2brown (Machine Learning) on November 28, 2023 at 6:01 pm
Hi r/MachineLearning, I wanted to share my open-source implementation of a really interesting piece of work I came across in my research on fine-tuning language models: orthogonal fine-tuning. Orthogonal fine-tuning (OFT) is a more robust, stable, and sample-efficient alternative to LoRA that was originally developed for fine-tuning diffusion models. While LoRA updates the pretrained weight matrix by adding a product of two low-rank matrices, OFT multiplies pretrained layer weights by a learnable orthogonal matrix to apply a constrained transformation. The authors of OFT recently showed that this approach (with a clever improvement known as butterfly OFT) also works well for vision transformers and language models. Inspired by minLoRA, I thought it would be great to have a minimal open-source repo to test out and compare OFT with LoRA when fine-tuning language models. It is also built on top of nanoGPT by Andrej Karpathy. The library is pip installable and can be used generically with any PyTorch model (including Hugging Face models), just like minLoRA. Feedback and contributions are welcome! You can try it out below: https://github.com/alif-munim/minOFT
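To give a flavor of the idea (my own illustrative sketch, not code from minOFT or the OFT paper): OFT keeps the pretrained weight W frozen and learns an orthogonal matrix Q that rotates it, with Q parameterized through the Cayley transform of a skew-symmetric matrix so that orthogonality holds by construction:

```python
import torch
import torch.nn as nn

class OFTLinear(nn.Module):
    """Frozen pretrained linear layer whose weight is rotated by a
    learnable orthogonal matrix (Cayley-parameterized)."""
    def __init__(self, base: nn.Linear):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)
        d = base.out_features
        # Unconstrained parameter; made skew-symmetric in forward().
        self.R = nn.Parameter(torch.zeros(d, d))  # zeros => Q = I (no-op init)

    def forward(self, x):
        S = self.R - self.R.T                      # skew-symmetric
        I = torch.eye(S.shape[0], device=S.device)
        Q = (I - S) @ torch.linalg.inv(I + S)      # Cayley transform: orthogonal
        w = Q @ self.base.weight                   # rotate pretrained weight
        return nn.functional.linear(x, w, self.base.bias)

layer = OFTLinear(nn.Linear(64, 64))
out = layer(torch.randn(8, 64))
```

The paper's butterfly variant factors Q into block-diagonal pieces to cut the parameter count; the dense version above is just the simplest form of the constraint.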
- [P] Alignment-as-code: Making LLM applications behave with Tanuki. by /u/Noddybear (Machine Learning) on November 28, 2023 at 5:37 pm
I'm a contributor to Tanuki, a project that allows you to declaratively define LLM behaviour using test-driven syntax in Python. By specifying the contract that an LLM has to fulfil as a test, it helps cut down on MLOps and enables you to align the behaviour of your model to your requirements using a standard dev-ops process. Additionally, these align statements facilitate automatic teacher-student model distillation to reduce cost and latency by up to 10x (see benchmarks). Any thoughts or feedback are much appreciated.
- [R] Cross-Axis Transformer with 2d Rotary Embeddings by /u/lilyerickson (Machine Learning) on November 28, 2023 at 4:34 pm
- [D] Is there any pre-trained model for face recognition? by /u/Rare-Durian-2121 (Machine Learning) on November 28, 2023 at 3:44 pm
I want to build a face recognition function in Java: input two face images and output whether they are the same person. Is there any pre-trained model that can be used directly? I tried using OpenCV's histogram normalization method for recognition, but the accuracy was very poor and unacceptable.
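Whatever the language, the approach that works in practice is comparing embeddings from a pretrained face encoder rather than histograms. A Python sketch of the embed-and-threshold pattern using the face_recognition library (dlib's ResNet encoder); the same logic can be ported to Java via OpenCV's DNN module or dlib bindings, and the image file names here are hypothetical:

```python
import face_recognition  # pip install face_recognition (bundles a dlib model)

def same_person(path_a: str, path_b: str, tolerance: float = 0.6) -> bool:
    """Embed both faces with a pretrained encoder and threshold the
    Euclidean distance between the 128-d embeddings."""
    enc_a = face_recognition.face_encodings(face_recognition.load_image_file(path_a))
    enc_b = face_recognition.face_encodings(face_recognition.load_image_file(path_b))
    if not enc_a or not enc_b:
        raise ValueError("no face detected in one of the images")
    return bool(face_recognition.compare_faces([enc_a[0]], enc_b[0],
                                               tolerance=tolerance)[0])

print(same_person("alice1.jpg", "alice2.jpg"))  # hypothetical image files
```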
- [D] Machine Learning Engineer Salary Increase? by /u/Fluid-Pipe-2831 (Machine Learning) on November 28, 2023 at 3:06 pm
Hey all, I graduated from university last year and have been working at a company in Florida as a machine learning engineer. I make 76k a year. This company offers tuition reimbursement for a master's degree. Typically, how much of a pay increase do you get after getting your master's? Follow-up question: would getting a master's from an online university (I would still be working full time) be any less prestigious than going in person? Please, if you're comfortable, would anyone mind also sharing their personal salary numbers straight out of college and how they've progressed throughout their career?
- [R] Robust Reinforcement Learning Is Not Safe by /u/ml_dnn (Machine Learning) on November 28, 2023 at 2:33 pm
https://blogs.ucl.ac.uk/steapp/2023/11/15/adversarial-attacks-robustness-and-generalization-in-deep-reinforcement-learning/
- [P] Evaluate, monitor, and safeguard your LLM-based apps by /u/AsDivyansh (Machine Learning) on November 28, 2023 at 2:02 pm
For the last couple of months, I, along with my team, invested a lot of effort into building a solution that can help users evaluate and monitor the performance of their LLM and AI apps. If you're a ChatGPT (or any other LLM :)) user and are integrating it into your apps, and if it has ever happened that the outputs you received weren't exactly the ones you were hoping for, you should find this useful 😃 Today, we released it publicly and launched it on Product Hunt. I would be very thankful if you try it out and support the launch. 🙏 https://www.producthunt.com/posts/deepchecks-llm-evaluation?r=h
- [R] How to Bridge the Gap between Modalities: A Comprehensive Survey on Multimodal Large Language Model by /u/APaperADay (Machine Learning) on November 28, 2023 at 1:18 pm
Paper: https://arxiv.org/abs/2311.07594
Abstract: This review paper explores Multimodal Large Language Models (MLLMs), which integrate Large Language Models (LLMs) like GPT-4 to handle multimodal data such as text and vision. MLLMs demonstrate capabilities like generating image narratives and answering image-based questions, bridging the gap towards real-world human-computer interactions and hinting at a potential pathway to artificial general intelligence. However, MLLMs still face challenges in processing the semantic gap in multimodality, which may lead to erroneous generation, posing potential risks to society. Choosing the appropriate modality alignment method is crucial, as improper methods might require more parameters with limited performance improvement. This paper aims to explore modality alignment methods for LLMs and their existing capabilities. Implementing modality alignment allows LLMs to address environmental issues and enhance accessibility. The study surveys existing modal alignment methods in MLLMs into four groups: (1) Multimodal Converters that change data into something LLMs can understand; (2) Multimodal Perceivers to improve how LLMs perceive different types of data; (3) Tools Assistance for changing data into one common format, usually text; and (4) Data-Driven methods that teach LLMs to understand specific types of data in a dataset. This field is still in a phase of exploration and experimentation, and we will organize and update various existing research methods for multimodal information alignment.
- [D] Is there a good Arabic model for open-domain dialogue? by /u/theonewhoask11 (Machine Learning) on November 28, 2023 at 1:15 pm
I really need it for a project.
- [D] NMT and unattended results - nllb-200 3.3B by /u/Main_Path_4051 (Machine Learning) on November 28, 2023 at 1:11 pm
Hi, as a unit test, I made some translations from English to French using the Little Red Riding Hood story. I don't really understand why the term "Little Red Riding Hood" is not translated consistently throughout the text, and is sometimes inaccurately translated. Here are some inputs/outputs.

Bad translations:

input (en): Little Red Riding Hood set out immediately to go to her grandmother, who lived in another village.
output (fr): Capuche Rouge se mit immédiatement en route pour aller voir sa grand-mère, qui vivait dans un autre village.

input (en): Little Red Riding Hood, hearing the big voice of the wolf, was at first afraid; but believing her grandmother had a cold and was hoarse, answered, "It is your grandchild Little Red Riding Hood, who has brought you a cake and a little pot of butter mother sends you."
output (fr): Little Red Riding Hood, entendant la grande voix du loup, a d'abord peur, mais croyant sa grand-mère avait un rhume et était enroué, a répondu: "C'est votre petit-fils Little Red Riding Hood, qui vous a apporté un gâteau et un petit pot de beurre mère vous envoie."

Accurate translations:

input (en): Little Red Riding Hood pulled the bobbin, and the door opened.
output (fr): Le Petit Chaperon Rouge a tiré la bobine et la porte s'est ouverte.

input (en): "Your grandchild, Little Red Riding Hood," replied the wolf, counterfeiting her voice; "who has brought you a cake and a little pot of butter sent you by mother."
output (fr): "Votre petite-fille, le Petit Chaperon Rouge, répondit le loup, imitant sa voix; qui vous a apporté un gâteau et un petit pot de beurre que vous a envoyé votre mère".
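For reference, a minimal sketch of driving nllb-200 through Hugging Face transformers; pinning the target language with forced_bos_token_id and using beam search are the standard knobs (the checkpoint name and sample sentence come from the post; the rest is ordinary API usage):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

name = "facebook/nllb-200-3.3B"
tok = AutoTokenizer.from_pretrained(name, src_lang="eng_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained(name)

text = ("Little Red Riding Hood set out immediately to go to her "
        "grandmother, who lived in another village.")
inputs = tok(text, return_tensors="pt")
out = model.generate(
    **inputs,
    forced_bos_token_id=tok.convert_tokens_to_ids("fra_Latn"),  # pin target lang
    num_beams=5,
    max_length=128,
)
print(tok.batch_decode(out, skip_special_tokens=True)[0])
```

Because each sentence is translated independently, named entities can drift between sentences; translating paragraph-sized chunks, or post-editing with a glossary of fixed terms, are the usual workarounds.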
- [D] Branching Submodel in Keras and Tensorflow by /u/work_account_mp (Machine Learning) on November 28, 2023 at 10:04 am
Let's say I have a model with two inputs. The first input is a number, and the second input is another number which in reality is a class. The model is split into two sub-models: the first sub-model works on the input, and the second sub-model works on the output of the first. The value of the first input varies greatly with the output of the second. Thus, I wish to have multiple candidates for the first sub-model and dynamically select which one to use at each step, both during training and inference, based on the value (class) of the second input. I did not manage to achieve this. I tried using tf.cond, tf.switch_case, and several other things, but I never managed. When I asked ChatGPT, it said I should be using PyTorch for this. Is there really no way to do this?
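One Keras-native workaround (an illustrative sketch, not the only answer): evaluate every candidate sub-model and select the desired output with a one-hot mask built from the class input. The graph stays static, which is what Keras wants, at the cost of computing all branches:

```python
import tensorflow as tf
from tensorflow import keras

NUM_CLASSES = 3

x_in = keras.Input(shape=(8,), name="value")
c_in = keras.Input(shape=(), dtype=tf.int32, name="class")

# One candidate first-stage sub-model per class (hypothetical sizes).
candidates = [keras.Sequential([keras.layers.Dense(16, activation="relu")])
              for _ in range(NUM_CLASSES)]

branch_outs = tf.stack([m(x_in) for m in candidates], axis=1)   # (B, C, 16)
mask = tf.one_hot(c_in, NUM_CLASSES)                            # (B, C)
selected = tf.reduce_sum(branch_outs * tf.expand_dims(mask, -1), axis=1)

second = keras.layers.Dense(1)(selected)   # second sub-model on selected output
model = keras.Model(inputs=[x_in, c_in], outputs=second)
model.compile(optimizer="adam", loss="mse")
```

The mask zeroes the gradients of the unselected branches, so each candidate is only trained on its own class, although all branches still execute on every step.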
- Adjusting Probability distribution Using Speculative Decoding [D] by /u/1azytux (Machine Learning) on November 28, 2023 at 9:07 am
Hi, speculative decoding runs a small model and a large model at the same time with a sampler in between, and in the standard setup the sampler's job is NOT to skew the probability distributions while doing so. There's a fairly simple Python implementation of this idea here. Is there a way we can adjust the probability distributions of either the small model or the large model for creative-writing generation?
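Nothing in the method stops you from reshaping the target distribution before running the usual accept/resample rule; the distribution-preserving guarantee then holds with respect to the reshaped distribution, which is exactly what you want for creative-writing adjustments. A NumPy sketch of one verification step with an added temperature knob (the temperature is the only nonstandard ingredient here):

```python
import numpy as np

def sharpen(q, temperature=0.8):
    """Reshape the large model's distribution before speculative sampling."""
    logits = np.log(q + 1e-12) / temperature
    e = np.exp(logits - logits.max())
    return e / e.sum()

def accept_or_resample(x, p_draft, q_target, rng=np.random.default_rng()):
    """Standard speculative-sampling step against the reshaped target:
    accept draft token x with prob min(1, q(x)/p(x)); otherwise resample
    from the residual max(0, q - p), renormalized."""
    q = sharpen(q_target)          # the only change: skew the target first
    if rng.random() < min(1.0, q[x] / p_draft[x]):
        return x
    residual = np.maximum(q - p_draft, 0.0)
    residual /= residual.sum()
    return rng.choice(len(q), p=residual)
```

Skewing the draft model's distribution toward the reshaped target (instead of, or in addition to, the target) only changes the acceptance rate, not the output distribution, so the quality knob belongs on the target side.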
- [D] NeurIPS 2023 Institutions Ranking by /u/Roland31415 (Machine Learning) on November 28, 2023 at 6:18 am
- [R] SuGaR: Surface-Aligned Gaussian Splatting for Efficient 3D Mesh Reconstruction and High-Quality Mesh Rendering by /u/Successful-Western27 (Machine Learning) on November 28, 2023 at 2:37 am
Computer vision researchers developed a way to create detailed 3D models from images in just minutes on a single GPU. Their method, called SuGaR, works by optimizing millions of tiny particles to match images of a scene. The key innovation is getting the particles to align to surfaces so they can be easily turned into a mesh.

Traditionally, 3D modeling is slow and resource-heavy. Laser scans are unwieldy. Photogrammetry point clouds lack detail. And neural radiance fields like NeRF produce amazing renders, but optimizing them into meshes takes hours or days even with beefy hardware. The demand for easier 3D content creation keeps growing for VR/AR, games, education, etc., but most techniques have big speed, quality, or cost limitations holding them back from mainstream use.

SuGaR combines recent advances in neural scene representations and computational geometry to push forward the state of the art in accessible 3D reconstruction. It starts by leveraging a method called Gaussian Splatting that basically uses tons of tiny particles to replicate a scene. Getting the particles placed and configured only takes minutes; the catch is they don't naturally form a coherent mesh. SuGaR contributes a new initialization and training approach that aligns the particles with scene surfaces while keeping detail intact. This conditioning allows the particle cloud to be treated directly as a point cloud.

They then apply a computational technique called Poisson Surface Reconstruction to directly build a mesh between the structured particles in a parallelized fashion. Handling millions of particles at once yields high fidelity at low latency. By moving the heavy lifting to the front-end point cloud structuring stage, SuGaR makes final mesh generation extremely efficient compared to other state-of-the-art neural/hybrid approaches.

Experiments showed SuGaR can build detailed meshes faster than previously published techniques by orders of magnitude, while achieving competitive visual quality. The paper shares some promising examples of complex scenes reconstructed in under 10 minutes. There are still questions around handling more diverse scene types, but in terms of bringing high-quality 3D reconstruction closer to interactive speeds on accessible hardware, this looks like compelling progress.

TLDR: Aligning particles from Gaussian Splatting lets you turn them into detailed meshes. Makes high-quality 3D better, faster, cheaper.

Full summary is here. Paper site here.
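The Poisson step mentioned above is available off the shelf; here is a generic sketch with Open3D on an oriented point cloud (this illustrates Poisson Surface Reconstruction in isolation, not the authors' SuGaR pipeline, and the input file name is hypothetical):

```python
import open3d as o3d

# Load an oriented point cloud; SuGaR's contribution is producing
# surface-aligned points like these from Gaussian splats.
pcd = o3d.io.read_point_cloud("aligned_points.ply")
if not pcd.has_normals():
    pcd.estimate_normals()  # Poisson reconstruction requires normals

mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9)  # higher depth => finer (and heavier) mesh
o3d.io.write_triangle_mesh("mesh.ply", mesh)
```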
- Introducing three new NVIDIA GPU-based Amazon EC2 instances by Chetan Kapoor (AWS Machine Learning Blog) on November 27, 2023 at 11:14 pm
Amazon Elastic Compute Cloud (Amazon EC2) accelerated computing portfolio offers the broadest choice of accelerators to power your artificial intelligence (AI), machine learning (ML), graphics, and high performance computing (HPC) workloads. We are excited to announce the expansion of this portfolio with three new instances featuring the latest NVIDIA GPUs: Amazon EC2 P5e instances powered
- Boost inference performance for LLMs with new Amazon SageMaker containers by Michael Nguyen (AWS Machine Learning Blog) on November 27, 2023 at 8:06 pm
Today, Amazon SageMaker launches a new version (0.25.0) of Large Model Inference (LMI) Deep Learning Containers (DLCs) and adds support for NVIDIA’s TensorRT-LLM Library. With these upgrades, you can effortlessly access state-of-the-art tooling to optimize large language models (LLMs) on SageMaker and achieve price-performance benefits – Amazon SageMaker LMI TensorRT-LLM DLC reduces latency by 33%
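For orientation, deploying a model with an LMI container follows the ordinary SageMaker SDK pattern. A heavily hedged sketch: the image URI below is a placeholder to be looked up in the AWS Deep Learning Containers list, the instance type and model ID are illustrative, and configuring via an OPTION_* environment variable is an assumption (LMI containers are typically configured through a serving.properties file):

```python
import sagemaker
from sagemaker.model import Model
from sagemaker.serializers import JSONSerializer
from sagemaker.deserializers import JSONDeserializer

role = sagemaker.get_execution_role()

# Placeholder: substitute the real LMI 0.25.0 TensorRT-LLM image URI
# for your region from the AWS Deep Learning Containers list.
image_uri = "<account>.dkr.ecr.us-east-1.amazonaws.com/djl-inference:0.25.0-tensorrtllm"

model = Model(
    image_uri=image_uri,
    role=role,
    # Assumption: OPTION_* env vars mirror serving.properties settings.
    env={"OPTION_MODEL_ID": "tiiuae/falcon-7b"},
)
predictor = model.deploy(initial_instance_count=1,
                         instance_type="ml.g5.12xlarge",
                         serializer=JSONSerializer(),
                         deserializer=JSONDeserializer())
print(predictor.predict({"inputs": "Hello, my name is"}))
```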
- Simplify data prep for generative AI with Amazon SageMaker Data Wrangler by Ajjay Govindaram (AWS Machine Learning Blog) on November 27, 2023 at 7:49 pm
Generative artificial intelligence (generative AI) models have demonstrated impressive capabilities in generating high-quality text, images, and other content. However, these models require massive amounts of clean, structured training data to reach their full potential. Most real-world data exists in unstructured formats like PDFs, which requires preprocessing before it can be used effectively. According to IDC,
- [D] AISTATS 2024 Paper Reviews by /u/zy415 (Machine Learning) on November 27, 2023 at 6:41 pm
AISTATS 2024 paper reviews are supposed to be out today. I thought I'd create a discussion thread for us to discuss any issues/complaints/celebrations or anything else. There is so much noise in the reviews every year. Some good work that the authors are proud of might get a low score because of the noisy system, given that AISTATS has grown large these years. We should keep in mind that the work is still valuable no matter what the score is.
- Democratize ML on Salesforce Data Cloud with no-code Amazon SageMaker Canvas by Daryl Martis (AWS Machine Learning Blog) on November 27, 2023 at 5:03 pm
This post is co-authored by Daryl Martis, Director of Product, Salesforce Einstein AI. This is the third post in a series discussing the integration of Salesforce Data Cloud and Amazon SageMaker. In Part 1 and Part 2, we show how the Salesforce Data Cloud and Einstein Studio integration with SageMaker allows businesses to access their
- [D] Do you obsessively watch your models train? by /u/TehDing (Machine Learning) on November 27, 2023 at 4:39 pm
I find myself watching TensorBoard more than working. Just wondering if others who have fallen into this pattern have words of advice w.r.t. productivity.
- AWS AI services enhanced with FM-powered capabilities by Bratin Saha (AWS Machine Learning Blog) on November 27, 2023 at 4:03 pm
Artificial intelligence (AI) continues to transform how we do business and serve our customers. AWS offers a range of pre-trained AI services that provide ready-to-use intelligence for your applications. In this post, we explore the new AI service capabilities and how they are enhanced using foundation models (FMs). We focus on the following major updates
- Elevate your self-service assistants with new generative AI features in Amazon Lex by Anuradha Durfee (AWS Machine Learning Blog) on November 27, 2023 at 4:26 am
In this post, we talk about how generative AI is changing the conversational AI industry by providing new customer and bot builder experiences, and the new features in Amazon Lex that take advantage of these advances. As the demand for conversational AI continues to grow, developers are seeking ways to enhance their chatbots with human-like
- Amazon Transcribe announces a new speech foundation model-powered ASR system that expands support to over 100 languages by Sumit Kumar (AWS Machine Learning Blog) on November 26, 2023 at 7:42 pm
Amazon Transcribe is a fully managed automatic speech recognition (ASR) service that makes it straightforward for you to add speech-to-text capabilities to your applications. Today, we are happy to announce a next-generation multi-billion parameter speech foundation model-powered system that expands automatic speech recognition to over 100 languages. In this post, we discuss some of the
- Drive hyper-personalized customer experiences with Amazon Personalize and generative AI by Jingwen Hu (AWS Machine Learning Blog) on November 26, 2023 at 7:41 pm
Today, we are excited to announce three launches that will help you enhance personalized customer experiences using Amazon Personalize and generative AI. Whether you’re looking for a managed solution or build your own, you can use these new capabilities to power your journey. Amazon Personalize is a fully managed machine learning (ML) service that makes
- Build brand loyalty by recommending actions to your users with Amazon Personalize Next Best Action by Shreeya Sharma (AWS Machine Learning Blog) on November 26, 2023 at 7:39 pm
Amazon Personalize is excited to announce the new Next Best Action (aws-next-best-action) recipe to help you determine the best actions to suggest to your individual users that will enable you to increase brand loyalty and conversion. Amazon Personalize is a fully managed machine learning (ML) service that makes it effortless for developers to deliver highly
- Accelerating AI/ML development at BMW Group with Amazon SageMaker Studio by Marc Neumann (AWS Machine Learning Blog) on November 24, 2023 at 5:49 pm
This post is co-written with Marc Neumann, Amor Steinberg and Marinus Krommenhoek from BMW Group. The BMW Group – headquartered in Munich, Germany – is driven by 149,000 employees worldwide and manufactures in over 30 production and assembly facilities across 15 countries. Today, the BMW Group is the world’s leading manufacturer of premium automobiles and
- Automating product description generation with Amazon Bedrock by Dhaval Shah (AWS Machine Learning Blog) on November 24, 2023 at 5:36 pm
In today’s ever-evolving world of ecommerce, the influence of a compelling product description cannot be overstated. It can be the decisive factor that turns a potential visitor into a paying customer or sends them clicking off to a competitor’s site. The manual creation of these descriptions across a vast array of products is a labor-intensive
- Optimizing costs for Amazon SageMaker Canvas with automatic shutdown of idle apps by Davide Gallitelli (AWS Machine Learning Blog) on November 24, 2023 at 5:30 pm
Amazon SageMaker Canvas is a rich, no-code Machine Learning (ML) and Generative AI workspace that has allowed customers all over the world to more easily adopt ML technologies to solve old and new challenges thanks to its visual, no-code interface. It does so by covering the ML workflow end-to-end: whether you’re looking for powerful data
- How SnapLogic built a text-to-pipeline application with Amazon Bedrock to translate business intent into action by Greg Benson (AWS Machine Learning Blog) on November 24, 2023 at 5:28 pm
This post was co-written with Greg Benson, Chief Scientist; Aaron Kesler, Sr. Product Manager; and Rich Dill, Enterprise Solutions Architect from SnapLogic. Many customers are building generative AI apps on Amazon Bedrock and Amazon CodeWhisperer to create code artifacts based on natural language. This use case highlights how large language models (LLMs) are able to
- Amazon EC2 DL2q instance for cost-efficient, high-performance AI inference is now generally available by A K Roy (AWS Machine Learning Blog) on November 22, 2023 at 11:54 pm
This is a guest post by A.K Roy from Qualcomm AI. Amazon Elastic Compute Cloud (Amazon EC2) DL2q instances, powered by Qualcomm AI 100 Standard accelerators, can be used to cost-efficiently deploy deep learning (DL) workloads in the cloud. They can also be used to develop and validate performance and accuracy of DL workloads that
- Your guide to generative AI and ML at AWS re:Invent 2023 by Denis V. Batalov (AWS Machine Learning Blog) on November 22, 2023 at 10:47 pm
Yes, the AWS re:Invent season is upon us and as always, the place to be is Las Vegas! You marked your calendars, you booked your hotel, and you even purchased the airfare. Now all you need is some guidance on generative AI and machine learning (ML) sessions to attend at this twelfth edition of re:Invent. And although generative AI has appeared in previous events, this year we’re taking it to the next level. In addition to several exciting announcements during keynotes, most of the sessions in our track will feature generative AI in one form or another, so we can truly call our track “Generative AI and ML.” In this post, we give you a sense of how the track is organized and highlight a few sessions we think you’ll like. And although our track focuses on generative AI, many other tracks have related sessions. Use the “Generative AI” tag as you are browsing the session catalog to find them.
- Build a contextual chatbot for financial services using Amazon SageMaker JumpStart, Llama 2 and Amazon OpenSearch Serverless with Vector Engine by Sunil Padmanabhan (AWS Machine Learning Blog) on November 22, 2023 at 2:45 pm
The financial service (FinServ) industry has unique generative AI requirements related to domain-specific data, data security, regulatory controls, and industry compliance standards. In addition, customers are looking for choices to select the most performant and cost-effective machine learning (ML) model and the ability to perform necessary customization (fine-tuning) to fit their business use cases. Amazon
- Build well-architected IDP solutions with a custom lens – Part 1: Operational excellence by Brijesh Pati (AWS Machine Learning Blog) on November 22, 2023 at 2:41 pm
The IDP Well-Architected Lens is intended for all AWS customers who use AWS to run intelligent document processing (IDP) solutions and are searching for guidance on how to build secure, efficient, and reliable IDP solutions on AWS. Building a production-ready solution in the cloud involves a series of trade-offs between resources, time, customer expectation, and
- Build well-architected IDP solutions with a custom lens – Part 2: Security by Sherry Ding (AWS Machine Learning Blog) on November 22, 2023 at 2:41 pm
Building a production-ready solution in AWS involves a series of trade-offs between resources, time, customer expectation, and business outcome. The AWS Well-Architected Framework helps you understand the benefits and risks of decisions you make while building workloads on AWS. By using the Framework, you will learn current operational and architectural recommendations for designing and operating
- Build well-architected IDP solutions with a custom lens – Part 3: Reliability by Rui Cardoso (AWS Machine Learning Blog) on November 22, 2023 at 2:41 pm
The IDP Well-Architected Custom Lens is intended for all AWS customers who use AWS to run intelligent document processing (IDP) solutions and are searching for guidance on how to build a secure, efficient, and reliable IDP solution on AWS. Building a production-ready solution in the cloud involves a series of trade-offs between resources, time, customer
- Build well-architected IDP solutions with a custom lens – Part 4: Performance efficiency by Mia Chang (AWS Machine Learning Blog) on November 22, 2023 at 2:40 pm
When a customer has a production-ready intelligent document processing (IDP) workload, we often receive requests for a Well-Architected review. To build an enterprise solution, developer resources, cost, time and user-experience have to be balanced to achieve the desired business outcome. The AWS Well-Architected Framework provides a systematic way for organizations to learn operational and architectural
- [D] Simple Questions Thread by /u/AutoModerator (Machine Learning) on November 19, 2023 at 4:00 pm
Please post your questions here instead of creating a new thread. Encourage others who create new posts for questions to post here instead! The thread will stay alive until the next one, so keep posting after the date in the title. Thanks to everyone for answering questions in the previous thread!