AWS Machine Learning Certification Specialty Exam Prep

AWS Machine Learning Specialty Certification Prep (Android)

The AWS Certified Machine Learning Specialty validates expertise in building, training, tuning, and deploying machine learning (ML) models on AWS.

Use this App to learn about Machine Learning on AWS and prepare for the AWS Certified Machine Learning - Specialty exam (MLS-C01).

Download AWS Machine Learning Specialty Exam Prep App on iOS

Download AWS Machine Learning Specialty Exam Prep App on Android/Web/Amazon

AWS MLS-C01 Machine Learning Specialty Exam Prep PRO


The App provides hundreds of quizzes and practice exam questions covering:

– Machine Learning Operations on AWS

– Modeling

– Data Engineering

– Computer Vision

– Exploratory Data Analysis

– ML Implementation and Operations

– Machine Learning Basics Questions and Answers

– Machine Learning Advanced Questions and Answers

– Scorecard

– Countdown timer

– Machine Learning Cheat Sheets

– Machine Learning Interview Questions and Answers

– Machine Learning Latest News

The App covers Machine Learning basics and advanced topics including: NLP, Computer Vision, Python, linear regression, logistic regression, sampling, datasets, statistical interaction, selection bias, non-Gaussian distributions, the bias-variance trade-off, the Normal Distribution, correlation and covariance, point estimates and confidence intervals, A/B testing, p-values, statistical power and sensitivity, overfitting and underfitting, regularization, the Law of Large Numbers, confounding variables, survivorship bias, univariate, bivariate, and multivariate analysis, resampling, ROC curves, TF-IDF vectorization, cluster sampling, and more.
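
To make one of these topics concrete, here is a dependency-free sketch of TF-IDF vectorization in plain Python (a simplified variant; libraries such as scikit-learn add smoothing and normalization, and the helper names below are our own):

```python
import math
from collections import Counter

def tf_idf(docs):
    """Compute TF-IDF weights for a list of tokenized documents.

    TF = term count / document length; IDF = log(N / document frequency).
    A simplified variant -- real vectorizers add smoothing.
    """
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))
    weights = []
    for doc in docs:
        counts = Counter(doc)
        length = len(doc)
        weights.append({
            term: (count / length) * math.log(n / df[term])
            for term, count in counts.items()
        })
    return weights

docs = [["the", "cat", "sat"], ["the", "dog", "ran"], ["the", "cat", "ran"]]
w = tf_idf(docs)
# "the" appears in every document, so its IDF -- and its weight -- is zero,
# while rarer terms like "dog" get a higher weight.
```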

Domain 1: Data Engineering

Create data repositories for machine learning.

Identify data sources (e.g., content and location, primary sources such as user data)

Determine storage mediums (e.g., DB, Data Lake, S3, EFS, EBS)

Identify and implement a data ingestion solution.

Data job styles/types (batch load, streaming)

Data ingestion pipelines (Batch-based ML workloads and streaming-based ML workloads), etc.
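
The two job styles differ mainly in when records become visible downstream. A toy, dependency-free sketch of the contrast (on AWS the streaming side would typically be Kinesis and the batch side S3 plus Glue, neither of which appears here):

```python
def batch_load(records, batch_size=3):
    """Batch style: accumulate records and emit them in fixed-size chunks."""
    for i in range(0, len(records), batch_size):
        yield records[i:i + batch_size]

def stream(records):
    """Streaming style: emit each record as soon as it arrives."""
    for record in records:
        yield record

events = list(range(7))
batches = list(batch_load(events))   # [[0, 1, 2], [3, 4, 5], [6]]
streamed = list(stream(events))      # [0, 1, 2, 3, 4, 5, 6]
```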

Domain 2: Exploratory Data Analysis

Sanitize and prepare data for modeling.

Perform feature engineering.

Analyze and visualize data for machine learning.
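
A first sanity pass when sanitizing a numeric column can be sketched with nothing but the standard library (illustrative only; in practice this is the territory of pandas or SageMaker Data Wrangler):

```python
import statistics

def summarize(values):
    """Basic EDA profile of a numeric column: completeness plus center/spread."""
    present = [v for v in values if v is not None]
    return {
        "count": len(values),
        "missing": len(values) - len(present),
        "mean": statistics.mean(present),
        "stdev": statistics.stdev(present),
        "min": min(present),
        "max": max(present),
    }

column = [12.0, None, 15.0, 11.0, None, 14.0]
profile = summarize(column)
# profile["missing"] == 2 and profile["mean"] == 13.0
```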

Domain 3: Modeling

Frame business problems as machine learning problems.

Select the appropriate model(s) for a given machine learning problem.

Train machine learning models.

Perform hyperparameter optimization.

Evaluate machine learning models.
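
Hyperparameter optimization at its simplest is a search over a grid, scored on held-out data. A dependency-free sketch against a made-up validation-loss function (SageMaker automatic model tuning performs a similar search as a managed service; all names below are illustrative):

```python
import itertools

def grid_search(param_grid, score_fn):
    """Exhaustively evaluate every parameter combination; return the best."""
    best_params, best_score = None, float("inf")
    keys = list(param_grid)
    for combo in itertools.product(*param_grid.values()):
        params = dict(zip(keys, combo))
        score = score_fn(params)
        if score < best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Stand-in for validation loss; a real search would train and evaluate a model.
def fake_validation_loss(p):
    return (p["lr"] - 0.1) ** 2 + (p["depth"] - 4) ** 2

grid = {"lr": [0.01, 0.1, 1.0], "depth": [2, 4, 8]}
best, loss = grid_search(grid, fake_validation_loss)
# best == {"lr": 0.1, "depth": 4} with loss == 0.0
```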

Domain 4: Machine Learning Implementation and Operations

Build machine learning solutions for performance, availability, scalability, resiliency, and fault tolerance.


Recommend and implement the appropriate machine learning services and features for a given business problem.


Apply basic AWS security practices to machine learning solutions.

Deploy and operationalize machine learning solutions.

Machine Learning Services covered:

Amazon Comprehend

AWS Deep Learning AMIs (DLAMI)

AWS DeepLens

Amazon Forecast

Amazon Fraud Detector

Amazon Lex

Amazon Polly

Amazon Rekognition

Amazon SageMaker

Amazon Textract

Amazon Transcribe

Amazon Translate

Other Services and topics covered are:



Data analysis/visualization

Model training

Model deployment/inference


AWS ML application services

Language relevant to ML (for example, Python, Java, Scala, R, SQL)

Notebooks and integrated development environments (IDEs)

S3, SageMaker, Kinesis, Lake Formation, Athena, Kibana, Redshift, Textract, EMR, Glue; data formats such as CSV, JSON, images, and Parquet; and databases

Amazon EC2, Amazon Elastic Container Registry (Amazon ECR), Amazon Elastic Container Service (Amazon ECS), Amazon Elastic Kubernetes Service (Amazon EKS), Amazon Redshift

Important: To succeed with the real exam, do not memorize the answers in this app. It is very important that you understand why a question is right or wrong and the concepts behind it by carefully reading the reference documents in the answers.

Note and disclaimer: We are not affiliated with Microsoft, Azure, Google, or Amazon. The questions are put together based on the certification study guide and materials available online. The questions in this app should help you pass the exam, but passing is not guaranteed, and we are not responsible for exam outcomes.

Download AWS Machine Learning Specialty Exam Prep App on iOS

Download AWS Machine Learning Specialty Exam Prep App on Android/Web/Amazon

  • [D] How can we approach this problem?
    by /u/corporatededmeat (Machine Learning) on May 26, 2022 at 12:34 pm

    You have a combined dataset consisting of 10 component datasets collected from 10 different sources. Independent models trained separately on each component dataset perform well on hold-out examples from that dataset. However, the aggregated model trained by combining the examples from all component datasets behaves weirdly. On hold-out examples from some component datasets, the aggregated model performs better than the independent models. On others, it performs worse than the independent models. During deployment, you expect to see input examples from these 10 component sources but also from many other sources which the model has not been trained on. What approach will you take to develop a model that will generalize well to examples from the seen and also the yet-unseen sources? submitted by /u/corporatededmeat [link] [comments]

  • [R] New datasets for StyleGAN
    by /u/RonMokady (Machine Learning) on May 26, 2022 at 12:06 pm

    Hi all, The Author is here. TL;DR: We show how StyleGAN can be adapted to raw unaligned images collected from the Internet. New datasets and models are available. How can we adapt StyleGAN to more complicated datasets? We have witnessed that a data-centric approach is the most effective. Raw image collections downloaded from the internet contain many outlier images and are characterized by a multi-modal distribution. Therefore, we perform automatic self-supervised filtering of the training data to remove the outliers. Our key idea is to use the generator itself for the filtering. In the second step, we employ a multi-modal variant of the StyleGAN truncation trick. This allows high quality generation while preserving the remarkable editing capabilities of StyleGAN. For more details and cool gifs, check our Project Page: Datasets and models: The datasets also can be directly downloaded: Demo for image generation: Feel free to ask anything that comes to your mind. submitted by /u/RonMokady [link] [comments]

  • [R] CNNs are Myopic
    by /u/downtownslim (Machine Learning) on May 26, 2022 at 5:59 am

    submitted by /u/downtownslim [link] [comments]

  • [R] Large Language Models are Zero-Shot Reasoners. My summary: Adding text such as "Let’s think step by step" to a prompt "elicits chain of thought from large language models across a variety of reasoning tasks".
    by /u/Wiskkey (Machine Learning) on May 26, 2022 at 1:43 am

    submitted by /u/Wiskkey [link] [comments]

  • [D] Semantic Segmentation/Remote Sensing Challenges
    by /u/incognitoacnt (Machine Learning) on May 26, 2022 at 12:02 am

    Does anyone know of any interesting Semantic Segmentation and/or Remote Sensing competition taking place this summer? Most of what I found ends in the next 1-2 weeks. submitted by /u/incognitoacnt [link] [comments]

  • [P] Scale ML experiments from JupyterLab to the cloud
    by /u/chrismarrie (Machine Learning) on May 25, 2022 at 8:59 pm

    First Medium article is out! Come see how Optumi is thinking about the shifting workflow needs of data science and machine learning professionals. submitted by /u/chrismarrie [link] [comments]

  • [D] Google Imagen authors now produce images based on your prompt!
    by /u/aifordummies (Machine Learning) on May 25, 2022 at 5:00 pm

    If you are interested in getting your text converted to an image by Google Brain Imagen use the following link: submitted by /u/aifordummies [link] [comments]

  • [D] From classification to regression and some physics analogies
    by /u/crispub (Machine Learning) on May 25, 2022 at 4:46 pm

    Hello, Here is: The paper describes how to adapt a set of classification algorithms in order to perform nonlinear regression. The algorithms are described with simple numerical examples. In the "Field Sampling Density" section, the described operation is akin to estimating the strength of a field. I am interested in your opinions. Thanks. submitted by /u/crispub [link] [comments]

  • [N] Pull Requests and Discussions on Hugging Face
    by /u/unofficialmerve (Machine Learning) on May 25, 2022 at 4:42 pm

    Hey, it's Merve from Hugging Face 👋 I wanted to share some big news that I hope you find useful. The 🤗 Hub now has pull requests (PR) and discussions in repositories to improve collaboration in machine learning 🥳✨ What does PR really mean here? Let’s assume you have a big PyTorch model and someone else ported it to TensorFlow, that person can contribute that to your model repository. Someone else can open a PR to improve your model, fix your machine learning demo in a Space or change anything in the dataset. This applies for model/Space/dataset (any repo) repositories on the hub. You might say this sounds familiar to GitHub. For code, GitHub works super well and we don’t want to (and it would be very inefficient to) recreate the feature set of GitHub. What we want to focus on is creating the collaboration toolset that’s optimized for ML. You can learn more about these new features here: Looking forward to your feedback and suggestions! ✨ Hope this is useful 🙂 submitted by /u/unofficialmerve [link] [comments]

  • [D] PyTorch processes taking up tons of GPU memory - any way to reduce this?
    by /u/tmuxed (Machine Learning) on May 25, 2022 at 4:41 pm

    I am running on Arch Linux 5.17.9-arch1-1 with an NVIDIA GeForce RTX 3090 GPU. I need to run multiple processes for a reinforcement learning task, where each subprocess runs the data collection (and inference) and all the samples from that are then retrieved via queues in the main process and optimized (e.g. think of PPO but distributed, like IMPALA). I am using torch.multiprocessing for this. Unfortunately, the multiple spawned subprocesses cause A LOT of overhead in terms of GPU memory being used. See below for my nvidia-smi output:
    | 0 N/A N/A 559025 C ...3/envs/ml/bin/python 1873MiB |
    | 0 N/A N/A 559026 C ...3/envs/ml/bin/python 1873MiB |
    | 0 N/A N/A 559027 C ...3/envs/ml/bin/python 1873MiB |
    | 0 N/A N/A 559028 C ...3/envs/ml/bin/python 1873MiB |
    | 0 N/A N/A 559029 C ...3/envs/ml/bin/python 1873MiB |
    | 0 N/A N/A 559030 C ...3/envs/ml/bin/python 1873MiB |
    | 0 N/A N/A 559031 C ...3/envs/ml/bin/python 1873MiB |
    | 0 N/A N/A 559032 C ...3/envs/ml/bin/python 1873MiB |
    | 0 N/A N/A 559033 C ...3/envs/ml/bin/python 1873MiB |
    So it seems like each subprocess loads the entirety of all of PyTorch into GPU memory, which seems incredibly inefficient. Is there a way to get the subprocesses to only load this once and then share it? How can I reduce the GPU footprint for each process? EDIT: Even using a basic example from the PyTorch repo I can see the same problem: Because it's not forking, it seems to be using up tons of GPU memory for each process. Can this not be fixed? submitted by /u/tmuxed [link] [comments]

  • [P] ZenML: Build vendor-agnostic, production-ready MLOps pipelines
    by /u/htahir1 (Machine Learning) on May 25, 2022 at 4:32 pm

    Hello r/MachineLearning! Some here might remember we open-sourced ZenML, a year or so ago, and started building it out in the open. Today, we're re-launching it to the world, with a brand-new look, and a sharper focus. We've spoken to hundreds of ML teams in the last year, and here is what we've found: 🐘 Getting ML into production reliably is still hard today. It takes too long, is too complicated, and not enough people know how to do it. 🦡 MLOps platforms are not the answer because they are opinionated, rigid, and slow to change. It's time for MLOps frameworks to shine, and bring structure to the ML ecosystem that is ripe for standardization. 🐼 Well-thought-out abstractions that make sense and are flexible are what the industry needs. Our launch blog post, "The Framework Way is the Best Way", goes into further detail on this. ZenML is a framework 🖼️, not a platform 🚉 , which standardizes your MLOps pipelines. With ZenML, we enable developers of all backgrounds by providing a vendor-agnostic, open-source MLOps solution that’s easy to plug into and just works. Here's how it works: Define Pipelines and Steps Define steps and pipelines in a simple SDK At the heart of ZenML, you will find our pipelines. These pipelines provide a simple interface for you to design your ML workflows in a portable and production-ready manner. Each pipeline consists of several steps, which ingest and generate artifacts. You can design these pipelines and steps using a simple Python SDK. Configure Infrastructure Via Stacks Define infrastructure configuration in a code-agnostic way via Stacks To execute these pipelines we need to use a ZenML Stack. A stack represents the configuration of your infrastructure and consists of several stand-alone components. Three components are essential to each stack. 
An artifact store, where you store the input and output artifacts of your steps; a metadata store, where your executions are tracked; and an orchestrator, which conducts the execution of pipelines and steps. You can also add additional components to your stacks, such as experiment trackers, secret managers, model deployers, and more. One of the most important traits of the stack is that it is completely separated from your step and pipeline code. This means if you want to move from a local setup to a remote setup all you need to do is go to our CLI, set up a different stack, and execute the same code. Maintain Extensibility and Modularity Use the built-in ones or define your own custom flavors! The good thing is that ZenML already comes equipped with a wide variety of implementations for these components. For example, you can use Airflow or Kubeflow to orchestrate the pipelines, Seldon or MLFlow to deploy your models, and Weights & Biases to track your experiments. And this list keeps growing every day. Furthermore, you can always use the simple base abstractions to create your own integrations. For example, you can write an Argo Workflows orchestrator by overriding a simple interface and registering it via the CLI. That's it for the short introduction. To see ZenML in action, go ahead, open your terminal, and type: GitHub: ZenML Website: Docs: So, what do you think? Exactly what we need in MLOps or just another tool for you? Looking forward to your feedback in the comments below! submitted by /u/htahir1 [link] [comments]

  • [P] Looking for advice how to apply clustering to learned embeddings of user-item interactions
    by /u/the_Wallie (Machine Learning) on May 25, 2022 at 3:31 pm

    hi, I'm working on a consumer segmentation job, where the goal is to understand if there are subgroups of consumers who behave in a similar way to each other, but different from the rest of the population. My dataset contains a user interacting with an item for a period of time, with a reasonable assumption that more time spent means a more favorable view of the item (so we can take the time spent as a proxy for the user's taste). So far, I've created an ML model to learn embeddings of size 64 to represent my approximately 950k users and their interactions with the ~10k items. My original plan was to apply k-means clustering to those 64-dimensional user embeddings. However, this approach isn't yielding the degree of separation I require (e.g. the top most popular items to interact with are all the same ones for each cluster). Trying different values for k, I also get a basically entirely flat elbow graph. How should I proceed from here? I've thought of 2 options: - retraining my embeddings with a smaller size and retrying with k-means - researching an alternative clustering algorithm is there anything I haven't considered yet, but should? If no, which of these 2 approaches would you explore first, and if you prefer the latter, which algorithm(s) would you test out first? Thanks for your help! submitted by /u/the_Wallie [link] [comments]

  • [D] Different input image size when using Visual Transformers
    by /u/alkibijad (Machine Learning) on May 25, 2022 at 3:14 pm

    I have an image classification problem, and have been using ResNet. The dense layers at the end are replaced with 1x1 convolutions, making the model fully convolutional. Classification is done on 128 x 128 patches, so if the input image is 128 x 128, I'll get output size 1x1. If image is 512 x 512, the output will be 4x4. Each output element will hold the prediction to which class the patch at that position belongs to. Now I'd like to try using transformer instead of ResNet. Can a similar thing be done with Vision Transformers? Are there any examples of that being done? submitted by /u/alkibijad [link] [comments]

  • [P] Second-tier Recommender System in FunCorp
    by /u/Puzzleheaded_Egg_396 (Machine Learning) on May 25, 2022 at 10:55 am

    Matrix decomposition alone is not ideal for improving recommendation systems; for example, it is hard to incorporate a user's gender and age. In this article, we describe how we implemented a second, ranking level of the model above the collaborative one, and how two-stage recommendation systems help us apply more complex algorithms. submitted by /u/Puzzleheaded_Egg_396 [link] [comments]

  • [D] Does TensorFlow Lite use the DropIT method to handle intermediate tensors?
    by /u/teraRockstar (Machine Learning) on May 25, 2022 at 10:42 am

    This blog post (Optimizing TensorFlow Lite Runtime Memory) says that TensorFlow Lite employs different approaches to handle intermediate tensors which occupy large amounts of memory. Is one of them DropIT: Dropping Intermediate Tensors for Memory-Efficient DNN Training method? submitted by /u/teraRockstar [link] [comments]

  • [D] ISTM results are out - First International Symposium on the Tsetlin Machine
    by /u/olegranmo (Machine Learning) on May 25, 2022 at 10:06 am

    ISTM Technical Program: You can find the full technical program here: submitted by /u/olegranmo [link] [comments]

  • [P] Image Background Changer : You can change background to whatever you want.
    by /u/supercornson (Machine Learning) on May 25, 2022 at 6:02 am

    This project was made using the rembg package, which performs image segmentation with U^2-Net. submitted by /u/supercornson [link] [comments]

  • [Discussion] Best Affordable way for doing ML online (Colab Pro, etc)
    by /u/BigNet1356 (Machine Learning) on May 25, 2022 at 4:47 am

    I have exhausted free gcp credits and I was wondering what are the best (affordable) ways to do machine learning online. Colab Pro (10 USD per month), Kaggle, Paperspace Gradient, etc come to mind. Any thoughts on which is the best? PS: I have used Colab Free version and Kaggle before - the session timeouts that lead you to re-run the notebook from the beginning are the worst experience ever. Also, the only information I could find about Gradient was from the founder, which obviously is not very reliable. Anybody else used it? submitted by /u/BigNet1356 [link] [comments]

  • [P] fastdup: tool for curating computer vision datasets at scale
    by /u/gradientflow (Machine Learning) on May 25, 2022 at 2:48 am submitted by /u/gradientflow [link] [comments]

  • [D] Google Speech to Text vs Building similar capability in house
    by /u/Mobile_Jacket_894 (Machine Learning) on May 24, 2022 at 9:53 pm

    Hey all, I hope you're well. TL;DR: I'm trying to compare the costs of building vs using existing technology, and I'd love input from those of you who have experience building speech or signal-processing ML/AI. I'm looking into building a SaaS feature that requires speech to text. As an MVP, we've been using Google's speech-to-text API, which is great - with a few problems: its cost and accuracy. While its accuracy is quite high, its cost, if I'm calculating correctly, would be quite high relative to our pricing. There are also benefits in terms of building accurate models for the specific industry we'd be using it in. Does anyone have any examples of what it would take to build something like that (costs / number of engineers) and, more importantly, to operate it (say, cost per minute in computing power / storage)? submitted by /u/Mobile_Jacket_894 [link] [comments]

  • [P] Official Imagen Website by Google Brain
    by /u/margilly_ai (Machine Learning) on May 24, 2022 at 6:59 pm submitted by /u/margilly_ai [link] [comments]

  • [P] Introducing BlindAI, an Open-source, fast and privacy-friendly AI deployment solution. Benefit from state-of-the-art AI without ever revealing your data!
    by /u/Separate-Still3770 (Machine Learning) on May 24, 2022 at 4:07 pm

    Hello everyone, We are pleased to introduce BlindAI to the AI community. BlindAI is an AI deployment solution, leveraging secure enclaves, to make remotely hosted AI models privacy friendly. Please have a look at our GitHub ( to find out more!
    Motivation
    Today, most AI tools offer no privacy-by-design mechanisms, so when data is sent to be analysed by third parties, the data is exposed to malicious usage or potential leakage. We illustrate it below with the use of AI for voice assistants. Audio recordings are often sent to the Cloud to be analysed, leaving conversations exposed to leaks and uncontrolled usage without users’ knowledge or consent.
    Before and after BlindAI
    By using BlindAI, data remains protected at all times, as it is only decrypted inside a Trusted Execution Environment, called an enclave, whose contents are protected by hardware. While data is in the clear inside the enclave, it is inaccessible to the outside thanks to isolation and memory encryption. This way, data can be processed, enriched, and analysed by AI, without exposing it to external parties.
    What you can do
    We have been able to run several state-of-the-art models with privacy guarantees, enabling us to tackle complex scenarios, from privacy-friendly voice assistants with Wav2vec2, to confidential chest X-Ray analysis with ResNet, through document analysis with BERT. All of these models have been tested and can run with end-to-end protection under a second on an Intel(R) Xeon(R) Platinum 8370C.
    Model name | Example use case | Inference time (ms)
    DistilBERT | Sentiment analysis | 28.435
    Wav2vec2 | Speech to text | 617.04
    Facenet | Facial recognition | 47.135
    A more detailed list of models we can deploy with privacy, with their run time, can be found here. If you like it drop a ⭐ on our GitHub (! submitted by /u/Separate-Still3770 [link] [comments]

  • [D] Recent research and methods for time series forecasting
    by /u/ndalal01 (Machine Learning) on May 24, 2022 at 1:15 pm

    Recent advances in Vision and NLP are dominating the AI community at the moment. I have been trying to find out if something exciting has been done for time series forecasting recently (last five years or so). Looking for some good starting points to keep up with the latest research. Any pointers would be highly appreciated. submitted by /u/ndalal01 [link] [comments]

  • [P] What we learned by making T5-large 2X faster than Pytorch (and any autoregressive transformer)
    by /u/pommedeterresautee (Machine Learning) on May 24, 2022 at 6:49 am

    TL;DR We made autoregressive transformer based models like T5-large 2X faster than 🤗 Hugging Face Pytorch with 3 simple tricks: storing 2 computation graphs in a single Onnx file 👯: this lets us have both cache and no cache support without having any duplicated weights. When cache is used, attention switches from quadratic to linear complexity (less GPU computation) and Onnx Runtime brings us kernel fusion (fewer memory bound ops); zero copy 💥 to retrieve output from Onnx Runtime: we leverage the Cupy API to access Onnx Runtime internal CUDA arrays and expose them through Dlpack to Pytorch. It may sound a bit complex, but it lets us avoid output tensor copies, which limits our memory footprint and makes us much faster (check the notebook for other benefits of this approach); a generic tool to convert any model (whatever the architecture) to FP16: it injects random inputs into the model to detect nodes that need to be kept in FP32, because "mixed precision" is more complicated on large generative models (usual patterns don't work at large scale). notebook: (Onnx Runtime only) project: For TensorRT we have our own implementation of the approach described above, which helps to provide similar latency to Onnx Runtime. It's in a dedicated Python script in the same folder as the notebook. We had to work around a documented limitation. Because of that the code is slightly more complex and we wanted to keep this notebook easy to read.
    Figure: text generation in 2 different setups: no cache == no long seq len
    The challenge
    We plan to use large autoregressive models like T5 mainly for few-shot learning but they tend to be slow. We needed something faster (including long sequences, large models, etc.), easy to deploy (no exotic/custom framework/hardware/kernel) and generic (works on most generative transformer models, NLP related or not, compatible with Onnx Runtime and TensorRT that we are using for other stuff).
In most situations, performing inference with Onnx Runtime or TensorRT brings large improvements over the Pytorch/Hugging Face implementation. In the very specific case of autoregressive language models, things are a bit more complicated. As you know (if not, check the notebook above for a longer explanation), you can accelerate an autoregressive model by caching Key/Value representations. By using a cache, for each generated token, you are switching from a quadratic complexity to a linear one in the self/cross attention modules. Only the first generated token is done without cache. Hugging Face is using this mechanism. When you export your model to Onnx using tracing, any control flow instruction is lost (including the If instruction to enable or not a cache). All the T5 inference solutions we found seem to suffer from it (a list of existing solutions and their issues is provided in the notebook).
Performance analysis and next steps
With our simple approach, we have made the inference latency mostly linear in the sequence length. Profiling the GPU with Nvidia Nsight shows that GPU computation capacities are mostly unused. It likely means that we are memory bound; that would make sense, as for each step we just perform computations for a single token.
Figure: Left side, no cache, GPU is very busy; right side, GPU is waiting on memory-bound operations (timings are wrong because of the profiler overhead).
Going deeper into the analysis, the Onnx Runtime profiler confirms that we are memory bound and spend lots of time casting to FP16/FP32. A strategy to increase performance would be to reduce the number of casting nodes (by a second pass on the graph to remove unnecessary casting nodes). Casting nodes should be easy to reduce. Second point: MatMul (the only operation where GPU computation capacities are fully used) represents a small part of the latency, because attention is now computed for only one token (except the first one).
It means that after these transformations of the computation graph, kernel fusions to reduce the number of memory-bound operations should pay off in a much bigger way than they did in the past. Hopefully such kernel fusions will land in both TensorRT and Onnx Runtime soon. Nvidia Triton server deployment will be released when Onnx Runtime 1.12 is supported (ORT 1.12 should be released in June, and Triton... soon after?). If you are interested in these things, you can follow me on twitter: submitted by /u/pommedeterresautee [link] [comments]
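
The caching trick this post relies on can be illustrated with a toy operation count: without a cache, step t re-projects keys/values for all t tokens seen so far; with a cache, each step projects only the newest token and appends it (a sketch of the complexity argument, not real attention):

```python
def decode_ops(num_tokens, use_cache):
    """Count key/value projection operations during autoregressive decoding."""
    ops, cache = 0, []
    for t in range(1, num_tokens + 1):
        if use_cache:
            cache.append(t)  # project only the newest token, reuse the rest
            ops += 1
        else:
            ops += t         # re-project every token seen so far
    return ops

# Generating 100 tokens: quadratic work without a cache, linear with one.
no_cache_ops = decode_ops(100, use_cache=False)  # 1 + 2 + ... + 100 = 5050
cached_ops = decode_ops(100, use_cache=True)     # 100
```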

  • [P] Imagen: Latest text-to-image generation model from Google Brain!
    by /u/aifordummies (Machine Learning) on May 23, 2022 at 10:13 pm

    Imagen - unprecedented photorealism × deep level of language understanding Imagen builds on the power of large transformer language models in understanding text and hinges on the strength of diffusion models in high-fidelity image generation. Human raters prefer Imagen over other models (such as DALL-E 2) in side-by-side comparisons, both in terms of sample quality and image-text alignment. submitted by /u/aifordummies [link] [comments]

  • [D] Machine Learning - WAYR (What Are You Reading) - Week 138
    by /u/ML_WAYR_bot (Machine Learning) on May 22, 2022 at 9:49 pm

    This is a place to share machine learning research papers, journals, and articles that you're reading this week. If it relates to what you're researching, by all means elaborate and give us your insight, otherwise it could just be an interesting paper you've read. Please try to provide some insight from your understanding and please don't post things which are present in wiki. Preferably you should link the arxiv page (not the PDF, you can easily access the PDF from the summary page but not the other way around) or any other pertinent links. Previous weeks: 1-10, 11-20, 21-30, 31-40, 41-50, 51-60, 61-70, 71-80, 81-90, 91-100, 101-110, 111-120, 121-130, 131-140. Most upvoted papers two weeks ago: /u/joyful_reader: Article 1 /u/need___username: /u/CatalyzeX_code_bot: Paper link Besides that, there are no rules, have fun. submitted by /u/ML_WAYR_bot [link] [comments]

  • [D] Simple Questions Thread
    by /u/AutoModerator (Machine Learning) on May 22, 2022 at 3:00 pm

    Please post your questions here instead of creating a new thread. Encourage others who create new posts for questions to post here instead! Thread will stay alive until next one so keep posting after the date in the title. Thanks to everyone for answering questions in the previous thread! submitted by /u/AutoModerator [link] [comments]

  • Monkey Patching Python Code
    by Adrian Tam (Blog) on May 21, 2022 at 2:00 pm

    Python is a dynamic scripting language. Not only does it have a dynamic type system where a variable can be assigned to one type first and changed later, but its object model is also dynamic. This allows us to modify its behavior at run time. A consequence of this is the possibility of monkey patching. The post Monkey Patching Python Code appeared first on Machine Learning Mastery.
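
A minimal example of the technique the post describes - rebinding a method on a class at run time without touching its source (the class and function names here are our own):

```python
class Greeter:
    def greet(self):
        return "Hello!"

g = Greeter()
original = Greeter.greet           # keep a reference so the patch can be undone

def excited_greet(self):
    return original(self).upper()  # wrap the original behavior

Greeter.greet = excited_greet      # the monkey patch: rebind the attribute
patched = g.greet()                # existing instances pick up the new method
Greeter.greet = original           # restore the original behavior
restored = g.greet()
```

Because method lookup goes through the class at call time, even objects created before the patch see the new behavior, which is exactly what makes monkey patching both powerful and risky.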

  • Detect social media fake news using graph machine learning with Amazon Neptune ML
    by Hasan Shojaei (AWS Machine Learning Blog) on May 19, 2022 at 4:12 pm

    In recent years, social media has become a common means for sharing and consuming news. However, the spread of misinformation and fake news on these platforms has posed a major challenge to the well-being of individuals and societies. Therefore, it is imperative that we develop robust and automated solutions for early detection of fake news

  • Optimize F1 aerodynamic geometries via Design of Experiments and machine learning
    by Pablo Hermoso Moreno (AWS Machine Learning Blog) on May 19, 2022 at 4:02 pm

    FORMULA 1 (F1) cars are the fastest regulated road-course racing vehicles in the world. Although these open-wheel automobiles are only 20–30 kilometers per hour (or 12–18 miles per hour) faster than top-of-the-line sports cars, they can speed around corners up to five times as fast due to the powerful aerodynamic downforce they create. Downforce is the vertical force

  • Build a risk management machine learning workflow on Amazon SageMaker with no code
    by Peter Chung (AWS Machine Learning Blog) on May 19, 2022 at 3:47 pm

    Since the global financial crisis, risk management has taken a major role in shaping decision-making for banks, including predicting loan status for potential customers. This is often a data-intensive exercise that requires machine learning (ML). However, not all organizations have the data science resources and expertise to build a risk management ML workflow. Amazon SageMaker

  • Logging in Python
    by Daniel Chung (Blog) on May 18, 2022 at 8:00 pm

    Logging is a way to store information about your script and track events that occur. When writing any complex script in Python, logging is essential for debugging software as you develop it. Without logging, finding the source of a problem in your code may be extremely time consuming. After completing this tutorial, you will know: The post Logging in Python appeared first on Machine Learning Mastery.
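    A minimal sketch of the standard library's logging module (the logger name and messages are invented for illustration; the in-memory stream just keeps the example self-contained):

    ```python
    import io
    import logging

    # Send log records to an in-memory stream instead of a file or stderr
    stream = io.StringIO()
    handler = logging.StreamHandler(stream)
    handler.setFormatter(logging.Formatter("%(levelname)s:%(name)s:%(message)s"))

    log = logging.getLogger("demo")
    log.setLevel(logging.DEBUG)
    log.addHandler(handler)

    log.debug("loading input file")      # fine-grained detail for debugging
    log.warning("missing values found")  # an event worth flagging

    output = stream.getvalue()
    print(output)
    ```

    Swapping the StreamHandler for a FileHandler stores the same records on disk, which is the usual way to track events after a script has run.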

  • Use Amazon Lex to capture street addresses
    by Brian Yost (AWS Machine Learning Blog) on May 18, 2022 at 6:18 pm

    Amazon Lex provides automatic speech recognition (ASR) and natural language understanding (NLU) technologies to transcribe user input, identify the nature of their request, and efficiently manage conversations. Lex lets you create sophisticated conversations, streamline your user experience to improve customer satisfaction (CSAT) scores, and increase containment in your contact centers. Natural, effective customer interactions require

  • Customize pronunciation using lexicons in Amazon Polly
    by Ratan Kumar (AWS Machine Learning Blog) on May 17, 2022 at 3:36 pm

    Amazon Polly is a text-to-speech service that uses advanced deep learning technologies to synthesize natural-sounding human speech. It is used in a variety of use cases, such as contact center systems, delivering conversational user experiences with human-like voices for automated real-time status check, automated account and billing inquiries, and by news agencies like The Washington

  • Personalize your machine translation results by using fuzzy matching with Amazon Translate
    by Narcisse Zekpa (AWS Machine Learning Blog) on May 16, 2022 at 5:48 pm

    A person’s vernacular is part of the characteristics that make them unique. There are often countless different ways to express one specific idea. When a firm communicates with their customers, it’s critical that the message is delivered in a way that best represents the information they’re trying to convey. This becomes even more important when

  • Profiling Python Code
    by Adrian Tam (Blog) on May 14, 2022 at 10:00 am

    Profiling is a technique to figure out how time is spent in a program. With these statistics, we can find the “hot spot” of a program and think about ways of improvement. Sometimes, a hot spot in an unexpected location may hint at a bug in the program as well. In this tutorial, we will The post Profiling Python Code appeared first on Machine Learning Mastery.
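    A minimal sketch of the technique using the standard library's cProfile and pstats (the profiled function is invented for illustration):

    ```python
    import cProfile
    import io
    import pstats

    def slow_sum(n):
        # A deliberately unremarkable "hot spot" to show up in the profile
        return sum(i * i for i in range(n))

    profiler = cProfile.Profile()
    profiler.enable()
    slow_sum(100_000)
    profiler.disable()

    # Summarize the collected statistics, sorted by cumulative time
    buf = io.StringIO()
    pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(5)
    report = buf.getvalue()
    print(report)
    ```

    The report lists call counts and per-function timings; the functions at the top of the cumulative ranking are the hot spots worth optimizing first.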

  • Enhance the caller experience with hints in Amazon Lex
    by Kai Loreck (AWS Machine Learning Blog) on May 13, 2022 at 10:36 pm

    We understand speech input better if we have some background on the topic of conversation. Consider a customer service agent at an auto parts wholesaler helping with orders. If the agent knows that the customer is looking for tires, they’re more likely to recognize responses (for example, “Michelin”) on the phone. Agents often pick up

  • Run automatic model tuning with Amazon SageMaker JumpStart
    by Doug Mbaya (AWS Machine Learning Blog) on May 13, 2022 at 12:09 am

    In December 2020, AWS announced the general availability of Amazon SageMaker JumpStart, a capability of Amazon SageMaker that helps you quickly and easily get started with machine learning (ML). In March 2022, we also announced the support for APIs in JumpStart. JumpStart provides one-click fine-tuning and deployment of a wide variety of pre-trained models across

  • Image classification and object detection using Amazon Rekognition Custom Labels and Amazon SageMaker JumpStart
    by Pashmeen Mistry (AWS Machine Learning Blog) on May 12, 2022 at 10:07 pm

    In the last decade, computer vision use cases have been a growing trend, especially in industries like insurance, automotive, ecommerce, energy, retail, manufacturing, and others. Customers are building computer vision machine learning (ML) models to bring operational efficiencies and automation to their processes. Such models help automate the classification of images or detection of objects

  • Intelligently search your Jira projects with Amazon Kendra Jira cloud connector
    by Shreyas Subramanian (AWS Machine Learning Blog) on May 12, 2022 at 8:37 pm

    Organizations use agile project management platforms such as Atlassian Jira to enable teams to collaborate to plan, track, and ship deliverables. Jira captures organizational knowledge about the workings of the deliverables in the issues and comments logged during project implementation. However, making this knowledge easily and securely available to users is challenging due to it

  • The Intel® 3D Athlete Tracking (3DAT) scalable architecture deploys pose estimation models using Amazon Kinesis Data Streams and Amazon EKS
    by Han Man (AWS Machine Learning Blog) on May 12, 2022 at 6:42 pm

    This blog post is co-written by Jonathan Lee, Nelson Leung, Paul Min, and Troy Squillaci from Intel. In Part 1 of this post, we discussed how Intel® 3DAT collaborated with AWS Machine Learning Professional Services (MLPS) to build a scalable AI SaaS application. 3DAT uses computer vision and AI to recognize, track, and analyze over 1,000

  • Moderate, classify, and process documents using Amazon Rekognition and Amazon Textract
    by Jay Rao (AWS Machine Learning Blog) on May 12, 2022 at 5:38 pm

    Many companies are overwhelmed by the sheer volume of documents they have to process, organize, and classify to serve their customers better. Examples include loan applications, tax filings, and bills. Such documents are more commonly received in image formats and are mostly multi-paged and of low quality. To be more competitive and

  • Achieve in-vehicle comfort using personalized machine learning and Amazon SageMaker
    by Joshua Levy (AWS Machine Learning Blog) on May 11, 2022 at 4:24 pm

    This blog post is co-written by Rudra Hota and Esaias Pech from Continental AG. Many drivers have had the experience of trying to adjust temperature settings in their vehicle while attempting to keep their eyes on the road. Whether the previous driver preferred a warmer cabin temperature, or you’re now wearing warmer clothing, or the

  • Create video subtitles with Amazon Transcribe using this no-code workflow
    by Jason O'Malley (AWS Machine Learning Blog) on May 10, 2022 at 6:23 pm

    Subtitle creation on video content poses challenges no matter how big or small the organization. To address those challenges, Amazon Transcribe has a helpful feature that enables subtitle creation directly within the service. There is no machine learning (ML) or code writing required to get started. This post walks you through setting up a no-code

  • Utilize AWS AI services to automate content moderation and compliance
    by Lauren Mullennex (AWS Machine Learning Blog) on May 9, 2022 at 4:01 pm

    The daily volume of third-party and user-generated content (UGC) across industries is increasing exponentially. Startups, social media, gaming, and other industries must ensure their customers are protected, while keeping operational costs down. Businesses in the broadcasting and media industries often find it difficult to efficiently add ratings to content pieces and formats to comply with

  • Content moderation design patterns with AWS managed AI services
    by Nate Bachmeier (AWS Machine Learning Blog) on May 9, 2022 at 4:00 pm

    User-generated content (UGC) grows exponentially, as do the requirements and the cost of keeping content and online communities safe and compliant. Modern web and mobile platforms, from startups to large organizations, fuel businesses and drive user engagement through social features. Online community members expect safe and inclusive experiences where they can freely consume and

  • Static Analyzers in Python
    by Adrian Tam (Blog) on May 9, 2022 at 5:09 am

    Static analyzers are tools that help you check your code without really running your code. The most basic form of static analyzers is the syntax highlighters in your favorite editors. If you need to compile your code (say, in C++), your compiler, such as LLVM, may also provide some static analyzer functions to warn you The post Static Analyzers in Python appeared first on Machine Learning Mastery.
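    Full-blown analyzers like pylint or mypy do far more, but the core idea — inspecting code without running it — can be sketched with the standard library's ast module. The check below (a bare `except:` finder) is an invented example, not from the post:

    ```python
    import ast

    # Source code to analyze; note that it is parsed, never executed
    source = """
    try:
        risky()
    except:
        pass
    """

    tree = ast.parse(source)

    # A bare `except:` clause has no exception type on its handler node
    bare_excepts = [
        node.lineno
        for node in ast.walk(tree)
        if isinstance(node, ast.ExceptHandler) and node.type is None
    ]
    print(bare_excepts)
    ```

    Walking the syntax tree is how real linters flag suspicious patterns; here the offending `except:` is reported by line number without `risky()` ever being called.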

  • Process larger and wider datasets with Amazon SageMaker Data Wrangler
    by Haider Naqvi (AWS Machine Learning Blog) on May 6, 2022 at 5:30 pm

    Amazon SageMaker Data Wrangler reduces the time to aggregate and prepare data for machine learning (ML) from weeks to minutes in Amazon SageMaker Studio. Data Wrangler can simplify your data preparation and feature engineering processes and help you with data selection, cleaning, exploration, and visualization. Data Wrangler has over 300 built-in transforms written in PySpark,

  • Fine-tune transformer language models for linguistic diversity with Hugging Face on Amazon SageMaker
    by Arnav Khare (AWS Machine Learning Blog) on May 6, 2022 at 5:22 pm

    Approximately 7,000 languages are in use today. Despite attempts in the late 19th century to invent constructed languages such as Volapük or Esperanto, there is no sign of unification. People still choose to create new languages (think about your favorite movie character who speaks Klingon, Dothraki, or Elvish). Today, natural language processing (NLP) examples are

  • Build a custom Q&A dataset using Amazon SageMaker Ground Truth to train a Hugging Face Q&A NLU model
    by Jeremy Feltracco (AWS Machine Learning Blog) on May 6, 2022 at 4:29 pm

    In recent years, natural language understanding (NLU) has increasingly found business value, fueled by model improvements as well as the scalability and cost-efficiency of cloud-based infrastructure. Specifically, the Transformer deep learning architecture, often implemented in the form of BERT models, has been highly successful, but training, fine-tuning, and optimizing these models has proven to be

  • Use custom vocabulary in Amazon Lex to enhance speech recognition
    by Kai Loreck (AWS Machine Learning Blog) on May 5, 2022 at 10:34 pm

    In our daily conversations, we come across new words or terms that we may not know. Perhaps these are related to a new domain that we’re just getting familiar with, and we pick these up as we understand more about the domain. For example, home loan terminology (“curtailment”), shortened words, (“refi”, “comps”), and acronyms (“HELOC”)

  • Setting Breakpoints and Exception Hooks in Python
    by Stefania Cristina (Blog) on May 5, 2022 at 4:21 pm

    There are different ways of debugging code in Python, one of which is to introduce breakpoints into the code at points where one would like to invoke a Python debugger. The statements used to enter a debugging session at different call sites depend on the version of the Python interpreter that one is working with, The post Setting Breakpoints and Exception Hooks in Python appeared first on Machine Learning Mastery.
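    One of those statements is the built-in breakpoint() (Python 3.7+), which dispatches to sys.breakpointhook — by default, pdb.set_trace(). A minimal sketch of that hook mechanism (the recording hook is invented so the example runs non-interactively):

    ```python
    import sys

    hits = []

    def recording_hook(*args, **kwargs):
        # Stand-in for pdb.set_trace(): record the call instead of pausing
        hits.append("breakpoint reached")

    sys.breakpointhook = recording_hook

    x = 10
    breakpoint()  # dispatches to recording_hook, not an interactive debugger
    x += 1
    print(hits, x)
    ```

    In normal debugging you would leave the default hook in place and land in pdb at that line; replacing the hook is also how tools substitute their own debuggers.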

  • Using Kaggle in Machine Learning Projects
    by Zhe Ming Chng (Blog) on May 2, 2022 at 2:02 pm

    You’ve probably heard of Kaggle data science competitions, but did you know that Kaggle has many other features that can help you with your next machine learning project? For people looking for datasets for their next machine learning project, Kaggle allows you to access public datasets by others and share your own datasets. For those The post Using Kaggle in Machine Learning Projects appeared first on Machine Learning Mastery.

  • Techniques to Write Better Python Code
    by Adrian Tam (Blog) on April 29, 2022 at 2:47 pm

    We write a program to solve a problem or make a tool that we can repeatedly solve a similar problem. For the latter, it is inevitable that we come back to revisit the program we wrote, or someone else is reusing the program we write. There is also a chance that we will encounter data The post Techniques to Write Better Python Code appeared first on Machine Learning Mastery.

  • Take Your Machine Learning Skills Global
    by MLM Team (Blog) on April 28, 2022 at 2:48 am

    Sponsored Post: In our interconnected world, a decision made thousands of miles away can have lasting consequences for entire organizations or economies. When small changes have big effects, it is unsurprising that companies and governments are turning to machine learning and AI to accurately predict risk. How the Global Community is Applying Machine Learning The post Take Your Machine Learning Skills Global appeared first on Machine Learning Mastery.

  • Google Colab for Machine Learning Projects
    by Zhe Ming Chng (Blog) on April 27, 2022 at 7:39 pm

    Have you ever wanted an easy-to-configure interactive environment to run your machine learning code that came with access to GPUs for free? Google Colab is the answer you’ve been looking for. It is a convenient and easy-to-use way to run Jupyter notebooks on the cloud, and their free version comes with some limited access to The post Google Colab for Machine Learning Projects appeared first on Machine Learning Mastery.

  • Multiprocessing in Python
    by Daniel Chung (Blog) on April 25, 2022 at 2:02 pm

    When you work on a computer vision project, you probably need to preprocess a lot of image data. This is time-consuming, and it would be great if you could process multiple images in parallel. Multiprocessing is the ability of a system to run multiple processes at the same time. If you had a computer with a The post Multiprocessing in Python appeared first on Machine Learning Mastery.
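    A minimal sketch with the standard library's multiprocessing.Pool (the worker function is invented; the __main__ guard is required so child processes can re-import the module safely):

    ```python
    from multiprocessing import Pool

    def square(x):
        # Work executed in a separate worker process
        return x * x

    if __name__ == "__main__":
        # Distribute the inputs across 4 worker processes
        with Pool(processes=4) as pool:
            results = pool.map(square, range(8))
        print(results)
    ```

    In an image-preprocessing setting, the worker would load and transform one image; pool.map then fans the file list out across all available cores.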

