AWS Machine Learning Certification Specialty Exam Prep



The AWS Certified Machine Learning Specialty validates expertise in building, training, tuning, and deploying machine learning (ML) models on AWS.

Use this App to learn about Machine Learning on AWS and prepare for the AWS Machine Learning Specialty Certification MLS-C01.

Download AWS Machine Learning Specialty Exam Prep App on iOS


Download AWS Machine Learning Specialty Exam Prep App on Android/Web/Amazon

AWS MLS-C01 Machine Learning Specialty Exam Prep PRO

Download AWS Machine Learning Specialty Exam Prep App on iOS

Download AWS Machine Learning Specialty Exam Prep App on Android/Web/Amazon

The App provides hundreds of quizzes and practice exams covering:

– Machine Learning Operations on AWS

– Modeling

– Data Engineering

– Computer Vision

– Exploratory Data Analysis

– ML Implementation and Operations

– Machine Learning Basics Questions and Answers

– Machine Learning Advanced Questions and Answers

– Scorecard

– Countdown timer

– Machine Learning Cheat Sheets

– Machine Learning Interview Questions and Answers

– Machine Learning Latest News

The App covers machine learning basics and advanced topics, including: NLP, computer vision, Python, linear regression, logistic regression, sampling, datasets, statistical interaction, selection bias, non-Gaussian distributions, the bias-variance trade-off, the normal distribution, correlation and covariance, point estimates and confidence intervals, A/B testing, p-values, statistical power and sensitivity, overfitting and underfitting, regularization, the Law of Large Numbers, confounding variables, survivorship bias, univariate/bivariate/multivariate analysis, resampling, ROC curves, TF-IDF vectorization, cluster sampling, and more.

Domain 1: Data Engineering

Create data repositories for machine learning.

Identify data sources (e.g., content and location, primary sources such as user data)

Determine storage media (e.g., databases, data lakes, Amazon S3, Amazon EFS, Amazon EBS)

Identify and implement a data ingestion solution.

Data job styles/types (batch load, streaming)

Data ingestion pipelines (batch-based and streaming-based ML workloads)
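As a hedged illustration of the streaming ingestion style listed above, here is a minimal Python sketch that pushes records into an Amazon Kinesis data stream with boto3; the stream name and record fields are placeholders, not part of the exam guide.

```python
# Minimal sketch: streaming ingestion into Kinesis with boto3.
# "ml-ingest-stream" and the record fields are hypothetical placeholders.
import json
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

record = {"sensor_id": "s-42", "temperature": 21.7}
kinesis.put_record(
    StreamName="ml-ingest-stream",
    Data=json.dumps(record).encode("utf-8"),
    PartitionKey=record["sensor_id"],  # determines the shard the record lands on
)
```

A batch-based pipeline would instead land files in Amazon S3 and process them on a schedule (e.g., with AWS Glue or EMR).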

Domain 2: Exploratory Data Analysis

Sanitize and prepare data for modeling.

Perform feature engineering.

Analyze and visualize data for machine learning.

Domain 3: Modeling

Frame business problems as machine learning problems.

Select the appropriate model(s) for a given machine learning problem.

Train machine learning models.

Perform hyperparameter optimization.

Evaluate machine learning models.

Domain 4: Machine Learning Implementation and Operations

Build machine learning solutions for performance, availability, scalability, resiliency, and fault tolerance.

Recommend and implement the appropriate machine learning services and features for a given problem.

Apply basic AWS security practices to machine learning solutions.

Deploy and operationalize machine learning solutions.
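To make the deploy-and-operationalize objective concrete, here is a hedged sketch using the SageMaker Python SDK; the entry point script, IAM role ARN, and S3 path are placeholders for your own resources.

```python
# Hypothetical sketch: train a scikit-learn model on SageMaker and deploy
# it as a real-time endpoint. Script name, role ARN, and S3 URI are placeholders.
from sagemaker.sklearn.estimator import SKLearn

estimator = SKLearn(
    entry_point="train.py",                               # your training script
    role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder role
    instance_type="ml.m5.large",
    framework_version="1.0-1",
)
estimator.fit({"train": "s3://my-bucket/train/"})         # placeholder S3 path

predictor = estimator.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.large",
)
print(predictor.predict([[0.1, 0.2, 0.3]]))  # toy feature vector
predictor.delete_endpoint()                  # clean up to avoid idle charges
```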

Machine Learning Services covered:

Amazon Comprehend

AWS Deep Learning AMIs (DLAMI)

AWS DeepLens

Amazon Forecast

Amazon Fraud Detector

Amazon Lex

Amazon Polly

Amazon Rekognition

Amazon SageMaker

Amazon Textract

Amazon Transcribe

Amazon Translate

Other Services and topics covered are:

Ingestion/Collection

Processing/ETL

Data analysis/visualization

Model training

Model deployment/inference

Operational

AWS ML application services

Language relevant to ML (for example, Python, Java, Scala, R, SQL)

Notebooks and integrated development environments (IDEs)

Amazon S3, SageMaker, Kinesis, Lake Formation, Athena, Kibana, Redshift, Textract, EMR, and Glue; data formats such as CSV, JSON, images, and Parquet; and databases

Amazon EC2, Amazon Elastic Container Registry (Amazon ECR), Amazon Elastic Container Service (Amazon ECS), Amazon Elastic Kubernetes Service (Amazon EKS), Amazon Redshift

Important: To succeed on the real exam, do not memorize the answers in this app. It is very important that you understand why each answer is right or wrong, and the concepts behind it, by carefully reading the reference documents linked in the answers.

Note and disclaimer: We are not affiliated with Microsoft, Azure, Google, or Amazon. The questions are put together based on the certification study guide and materials available online. The questions in this app should help you pass the exam, but success is not guaranteed. We are not responsible for any exam you do not pass.


Download AWS Machine Learning Specialty Exam Prep App on iOS

Download AWS Machine Learning Specialty Exam Prep App on Android/Web/Amazon

  • [D] Precision/Recall in Keras
    by /u/danst83 (Machine Learning) on August 9, 2022 at 1:39 pm

    I'm tracking precision and recall metrics in a binary classifier built in Keras:

    model.compile(optimizer="rmsprop", loss="binary_crossentropy", metrics=[keras.metrics.Precision(), keras.metrics.Recall()])

    Epoch 1/4 13778/13778 [==============================] - 101s 7ms/step - loss: 0.1141 - precision_1: 0.8769 - recall_1: 0.4209

    How does the model know what threshold to use when reporting precision and recall during training? Or is it simply set to 0.5?
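    For reference, keras.metrics.Precision and keras.metrics.Recall default to a 0.5 decision threshold, and both accept an explicit thresholds argument; a minimal sketch with a toy model (input size is hypothetical):

```python
# Keras Precision/Recall use a 0.5 threshold unless told otherwise;
# `thresholds` accepts a float or a list of cutoffs to track at once.
from tensorflow import keras

model = keras.Sequential([
    keras.Input(shape=(20,)),  # toy input size
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])

model.compile(
    optimizer="rmsprop",
    loss="binary_crossentropy",
    metrics=[
        keras.metrics.Precision(thresholds=0.5),           # the default cutoff
        keras.metrics.Recall(thresholds=[0.3, 0.5, 0.7]),  # several cutoffs at once
    ],
)
```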

  • [N][R][CfP] All Things Attention- Bridging Different Perspectives on Attention Workshop @ NeurIPS 22
    by /u/NicePresentation4 (Machine Learning) on August 9, 2022 at 1:38 pm

    Hi all -- I'm Abhijat, one of the co-organizers of a workshop bringing together people from machine learning, cognitive science, neuroscience, psychology, and human-computer interaction, which we hope will help us start reaching common ground about how we think about attention across these fields! We invite you to submit papers (up to 9 pages for long papers and up to 5 pages for short papers) and/or attend the workshop to bring your own perspective to this discussion. Details below:

    Workshop website: https://attention-learning-workshop.github.io/
    Submission deadline: Sep 15, 2022

    The All Things Attention workshop aims to foster connections across disparate academic communities that conceptualize "Attention", such as neuroscience, psychology, machine learning, and human-computer interaction. Our speakers and panelists represent a diverse population from these related but often disparate fields! Workshop topics of interest include (but are not limited to):

    – Relationships between biological and artificial attention
    – Attention for reinforcement learning and decision making
    – Benefits and formulation of attention mechanisms for continual/lifelong learning
    – Attention as a tool for interpretation and explanation
    – The role of attention in human-computer interaction and human-robot interaction
    – Attention mechanisms in Deep Neural Network (DNN) architectures

    This workshop will be in person, and we hope to make the talks available online. The panel discussions may not be available online. Happy to take any questions below!

  • [D] Reading Group: Content-Based Image Retrieval
    by /u/JClub (Machine Learning) on August 9, 2022 at 12:35 pm

    More info at https://outsystems-ai-reading-group.github.io/

  • [P] I made a GitHub extension which recommends similar repos [Open Source]
    by /u/th3luck (Machine Learning) on August 9, 2022 at 12:34 pm

    I've always struggled to discover interesting repositories on GitHub. Also, when I was searching for open-source tools, I had to open multiple links and tabs to look at similar repos. That's when I decided that GitHub lacks recommendations on the repository page, just like in any social network, where opening a post shows a bunch of recommended posts or videos to increase engagement. I wrote a full article about the ML part of the project: https://indexstorm.com/git-rec You can download the extension or access the source code on our GitHub: https://github.com/indexStorm I would be very happy to hear any feedback, or if you upvote the extension on Product Hunt.

  • "[R]" What is a good research topic focused on disease prediction?
    by /u/Intelligent-Oil241 (Machine Learning) on August 9, 2022 at 11:57 am

    So basically the objective of my thesis is to make a prediction model for a specific disease based on a number of symptoms that I will collect from a hospital of my choosing (most likely from lab tests). The thing is, I'm looking for a disease that's not too well studied (unlike heart disease, kidney stones, COVID, etc.) but for which it's still possible to collect the necessary data.

  • [D] Is migrating full or partial resources from one environment to another possible?
    by /u/MyraEaty1128 (Machine Learning) on August 9, 2022 at 11:46 am

    Suppose a data expert works with all of his data and builds his predictive models in one environment. Now, to extend his future predictions or to build intelligent apps in another environment (server/platform/cloud), how could he get all of his past resources onto the new environment instantly?

  • [D] Informed semantic image segmentation?
    by /u/Boring-Violinist8291 (Machine Learning) on August 9, 2022 at 4:18 am

    I have a map of pixels in an image, classified as either A or B, and a variety of variables attributed to each of those pixels. What sorts of models follow a semantic image segmentation approach that would allow me to predict pixels as A or B using the attributed variables? Preferably a more recent model (e.g., beyond U-Net).

  • [R] Can machines learn how to behave?
    by /u/baylearn (Machine Learning) on August 9, 2022 at 3:27 am

    Interesting blog post by Blaise Aguera y Arcas, a VP who leads Google's AI group in Seattle: "Can machines learn how to behave?"

    Beyond the current news cycle about whether AIs are sentient is a more practical and immediately consequential conversation about AI value alignment: whether and how AIs can be imbued with human values. Today, this turns on the even more fundamental question of whether the newest generation of language models can or can't understand concepts - and on what it means to understand.¹ If, as some researchers contend, language models are mere "babblers" that randomly regurgitate their training data - "garbage in, garbage out" - then real AI value alignment is, at least for now, out of reach. Seemingly, the best we can do is to carefully curate training inputs to filter out "garbage", often referred to as "toxic content", even as we seek to broaden data sources to better represent human diversity. There are some profound challenges implied here, including governance (who gets to define what is "toxic"?), labor (is it humane to employ people to do "toxic content" filtering?²), and scale (how can we realistically build large models under such constraints?). This skeptical view also suggests a dubious payoff for the whole language model research program, since the practical value of a mere "babbler" is unclear: what meaningful tasks could a model with no understanding of concepts be entrusted to do? If the answer is none, then why bother with them at all?

    Rest of the blog: https://medium.com/@blaisea/can-machines-learn-how-to-behave-42a02a57fadb

  • [D] Does NeRF use the CPU a lot?
    by /u/No_Fig_3372 (Machine Learning) on August 9, 2022 at 12:04 am

    I’m about to use 12700k, 3090ti for studying nerf. But I’m worrying about cpu cooler. Will Noctua nh-d15 be enough? I thought Nerf wouldn’t use cpu that much. submitted by /u/No_Fig_3372 [link] [comments]

  • Sketch to Photo Real [D]
    by /u/davelargent (Machine Learning) on August 8, 2022 at 6:39 pm

    I've seen a number of style-transfer GANs and tools that take photography and video and create sketch, anime, and any number of other looks from them. I've also seen StyleGAN (more specifically StyleCLIP) take animated or sketched faces and turn them into photoreal versions of the input. I am curious whether there is something akin to StyleCLIP that can take a sketch or animated version of an image and convert it to photo reality. Say I had a frame from the He-Man cartoon of the 1980s showing an environment with a castle (just an example of an input), and I wanted to visualize that as a photoreal result. While it's possible to use images like this as init images in DALL-E or Disco etc., they also seem to require prompts to match, and the results are never that 1:1 with a photoreal interpretation of the animated scene. Canvas from Nvidia does something similar, but not with a real image input as far as I can tell. So what I am after is whether there are any projects or techniques people have come across to bring a sketch into photoreality (could also be an animated frame, CG render, etc.). Thanks in advance!

  • [D] What features would you like to see in a vector search database?
    by /u/Low-Yogurtcloset-812 (Machine Learning) on August 8, 2022 at 6:39 pm

    Vector similarity search, as you know, is a foundational component of many AI applications such as semantic search, video/image reverse search, fraud detection, and question answering systems. We are still in the early stages of NNext (read "Next"), an open-source vector search database, and are curious what kinds of features you'd like to see. For context, deploying an ANN library such as ScaNN, FAISS, or ANNOY is a trivial matter. What really counts is the ability for an ANN index to have core database features such as:

    – Metadata filtering and search
    – Index elasticity
    – Ability to perform CRUD operations, particularly ADD, DELETE, and UPDATE
    – Horizontal scalability
    – Better feature extractors (as you may be aware, CNNs are not ideal feature extractors because they dull the signal)
    – Cluster mode

    What other features would you like to see? We are aware of other open-source vector similarity databases such as Milvus, Weaviate, and Qdrant, as well as proprietary ones like Google's Vertex Matching Engine and Pinecone.io. These databases excel at what they intend to do and have been gaining widespread adoption. We are looking to address the needs that these other solutions can't or haven't met, for whatever reason, in order to be complementary to them. Code link: https://github.com/nnextdb/nnext
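    To make the gap concrete, here is a minimal sketch with FAISS (one of the ANN libraries named above): adding and searching vectors is easy, while metadata filtering, deletes/updates, and horizontal scaling are left to the application; the vectors here are random toy data.

```python
# Toy sketch: a bare ANN library gives fast similarity search, but no
# metadata filtering, CRUD semantics, or cluster mode out of the box.
import numpy as np
import faiss

d = 128                                           # vector dimensionality
xb = np.random.rand(10_000, d).astype("float32")  # "database" vectors
xq = np.random.rand(5, d).astype("float32")       # query vectors

index = faiss.IndexFlatL2(d)  # exact L2 index; IVF/HNSW variants trade recall for speed
index.add(xb)                 # ingestion is one call...
D, I = index.search(xq, 10)   # ...and so is search (distances, neighbor ids),
# but filtering by metadata or updating single vectors is up to the caller.
```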

  • (Certified!!) Adversarial Robustness for Free!
    by /u/89237849237498237427 (Machine Learning) on August 8, 2022 at 6:30 pm


  • [D] Polygon Annotation Tool
    by /u/Apprehensive-Wheel18 (Machine Learning) on August 8, 2022 at 4:52 pm

    Hello everyone, I have a text detection model that takes polygon annotations of an image rather than rectangles, and I want to annotate some images. Can you folks suggest a good open-source tool for polygon annotation?

  • MLOps at the edge with Amazon SageMaker Edge Manager and AWS IoT Greengrass
    by Bruno Pistone (AWS Machine Learning Blog) on August 8, 2022 at 4:48 pm

    Internet of Things (IoT) has enabled customers in multiple industries, such as manufacturing, automotive, and energy, to monitor and control real-world environments. By deploying a variety of edge IoT devices such as cameras, thermostats, and sensors, you can collect data, send it to the cloud, and build machine learning (ML) models to predict anomalies, failures,

  • [R] Few-shot Learning with Retrieval Augmented Language Models (Atlas) - Meta AI 2022 - Outperforming a 540B-parameter model by 3% despite having 50x fewer parameters!
    by /u/Singularian2501 (Machine Learning) on August 8, 2022 at 3:59 pm

    Paper: https://arxiv.org/abs/2208.03299 Abstract: Large language models have shown impressive few-shot results on a wide range of tasks. However, when knowledge is key for such results, as is the case for tasks such as question answering and fact checking, massive parameter counts to store knowledge seem to be needed. Retrieval augmented models are known to excel at knowledge intensive tasks without the need for as many parameters, but it is unclear whether they work in few-shot settings. In this work we present Atlas, a carefully designed and pre-trained retrieval augmented language model able to learn knowledge intensive tasks with very few training examples. We perform evaluations on a wide range of tasks, including MMLU, KILT and NaturalQuestions, and study the impact of the content of the document index, showing that it can easily be updated. Notably, Atlas reaches over 42% accuracy on Natural Questions using only 64 examples, outperforming a 540B-parameter model by 3% despite having 50x fewer parameters.
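    Not Atlas itself, but for readers new to the idea, a toy sketch of retrieval augmentation: fetch relevant documents first and condition the model on them, so knowledge lives in an updatable index rather than in the parameters (the documents and query below are invented).

```python
# Toy retrieval-augmented setup: TF-IDF retriever + prompt assembly.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "The Eiffel Tower is in Paris.",
    "Mount Everest is the tallest mountain.",
    "Python is a programming language.",
]
vectorizer = TfidfVectorizer().fit(docs)
doc_matrix = vectorizer.transform(docs)

def retrieve(query, k=1):
    """Return the k documents most similar to the query."""
    sims = cosine_similarity(vectorizer.transform([query]), doc_matrix)[0]
    return [docs[i] for i in sims.argsort()[::-1][:k]]

question = "Where is the Eiffel Tower?"
context = retrieve(question)[0]
prompt = f"Context: {context}\nQuestion: {question}"
print(prompt)  # this prompt would then go to a (much smaller) generator model
```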

  • [P] Mask R-CNN (matterport) does not generate masks or just generates them randomly
    by /u/Greckon121 (Machine Learning) on August 8, 2022 at 3:21 pm

    Hello everyone, I'm working on a project detecting two different types of olive branches. I'm following this code (based on matterport's Mask R-CNN) with my own dataset: https://github.com/AarohiSingla/Mask-RCNN-on-Custom-Dataset-2classes- . I had to make small changes to the code, but nothing that should interfere with generating masks. I have around 300 training images (annotated using VIA and exported to a JSON file) that I trained for 15 epochs with 10 steps per epoch; I trained only the heads layers, the minimum detection confidence is 80%, and the learning rate is 0.001. After training I got a weights file in .h5 format after each epoch, and I'm using my latest weights file (from epoch 15) for testing. The problem is that my images mostly show only detection (which is mostly correct, although it could be better) and not segmentation, or they show a mask that is completely random. I read that it could be a problem with the scipy version (https://github.com/matterport/Mask_RCNN/issues/2122), so I downgraded it; I also tried to modify shift = np.array([0, 0, 1., 1.]) in utils.py, but nothing helped. If anyone has suggestions about what could be causing the mask problem, I would appreciate the help. Also, if anyone has a suggestion on where to start (which hyperparameters to change first) so I could train my model better, it would help me a lot.
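    One hedged debugging step, using the matterport detection API the post is built on (`model` and `image` come from the poster's own setup): check whether the returned masks are empty or fail to overlap their boxes, which separates "no masks" from "random masks".

```python
# Sanity check for matterport Mask R-CNN outputs: r['masks'] is an
# (H, W, N) boolean array aligned with the N boxes in r['rois'].
results = model.detect([image], verbose=0)  # `model`/`image` from your setup
r = results[0]
print("boxes:", r['rois'].shape)            # (N, 4) as (y1, x1, y2, x2)
print("masks:", r['masks'].shape)           # (H, W, N)

for i in range(r['masks'].shape[-1]):
    y1, x1, y2, x2 = r['rois'][i]
    total = int(r['masks'][..., i].sum())            # all mask pixels
    inside = int(r['masks'][y1:y2, x1:x2, i].sum())  # pixels inside the box
    print(f"instance {i}: {total} mask px, {inside} inside its box")
# total == 0 means no mask was produced; inside << total suggests a "random" mask.
```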

  • [D] Kubeflow Update & Demonstration/Q&A
    by /u/AmicusRecruitment (Machine Learning) on August 8, 2022 at 1:43 pm

    Kubeflow requires an advanced team with vision and perseverance, and so does solving the world's hardest problems. This Kubeflow update will cover:

    – What Kubeflow is and why market leaders use it
    – User feedback from the Kubeflow User Survey
    – An update on Kubeflow 1.6
    – A Kubeflow use case demo: building a pipeline from a Jupyter notebook
    – How to get involved with Kubeflow

    With over 7,000 Slack members, Kubeflow is the open-source machine learning platform that delivers Kubernetes-native operations. Kubeflow integrates software components for model development, training, visualization, and tuning, along with pipeline deployments and model serving. It supports popular frameworks (e.g., TensorFlow, Keras, PyTorch, XGBoost, MXNet, scikit-learn) and provides Kubernetes operating efficiencies. In this workshop, Josh Bottum will review why market leaders are using Kubeflow and important feedback received in the Kubeflow User Survey. He will also review the Kubeflow release process and the benefits coming in Kubeflow 1.6. Demo gods willing, Josh will also provide a quick demo of how to build a Kubeflow pipeline from a Jupyter notebook. He will finish with information on how to get involved in the Kubeflow community. Josh has volunteered as a Kubeflow Community Product Manager since 2019. Over the last 12 releases, he has helped the Kubeflow project by running community meetings, triaging GitHub issues, answering Slack questions, recruiting code contributors, running user surveys, developing release roadmaps and presentations, writing blog posts, and providing Kubeflow demonstrations. Please don't be put off by having to register; this is a free live coding walk-through with a Q&A with Josh. If you'd like to see a different topic showcased in the future, please let us know! https://www.eventbrite.co.uk/e/python-live-kubeflow-update-and-demonstration-tickets-395193653857

  • [P] Deep Dive into NeRF (Neural Radiance Fields)
    by /u/dtransposed (Machine Learning) on August 8, 2022 at 11:47 am

    I set out to finally understand how this cool invention called NeRF (Neural Radiance Fields) works. In this post, I document my analysis of the algorithm: I simply run the code through my debugger, analyze what is going on step by step, and cross-reference my understanding with the original paper. And plotting - a lot of plotting to visualize concepts; we humans are visual beasts, after all. https://dtransposed.github.io/blog/2022/08/06/NeRF/

  • [D] Data Augmentation in Transformer feature space? (Master Thesis)
    by /u/friend_of_kalman (Machine Learning) on August 8, 2022 at 11:12 am

    Hey everyone, I'm currently figuring out the topic of my master's thesis, and I want to know if my idea is stupid or feasible, since I have not really worked with transformers before. Data: 10 years of medical data from a fairly big hospital's ICU - about 100 biomarkers at a temporal resolution of 30 minutes per patient, with an average ICU stay of 5 days. Method: I found a paper that describes how to perform data augmentation in the latent space of an encoder-decoder model by interpolating between the latent representations of two samples and generating a new sample from the result [DeVries, source]. Now my idea: since transformers are basically special encoder-decoders (as I understand them), they also create a latent (feature) space. I want to test whether the augmentation technique used by DeVries also works in transformers, and whether it performs better or worse compared to the normal encoder-decoder they used. For those of you with a better understanding of transformers than I have: is this possible in theory? Cheers in advance, and please ask questions if I didn't explain myself properly.
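    A minimal sketch of the DeVries-style augmentation described above, assuming a trained model split into encoder and decoder halves (both are placeholders for the poster's own model):

```python
# Hypothetical latent-space augmentation: encode two real samples,
# interpolate between their latent vectors, decode the blend as a new sample.
def latent_interpolate(encoder, decoder, x1, x2, alpha=0.5):
    """Blend two samples in feature space; alpha in (0, 1) interpolates."""
    z1 = encoder.predict(x1[None, ...])  # add batch dimension
    z2 = encoder.predict(x2[None, ...])
    z_new = z1 + alpha * (z2 - z1)       # linear interpolation in latent space
    return decoder.predict(z_new)[0]

# x_aug = latent_interpolate(encoder, decoder, x_a, x_b, alpha=0.3)
```

    For a transformer, the same recipe would interpolate the encoder's output representations before they are fed to the decoder; whether that works is exactly the thesis question.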

  • [R] Multimodal Learning with Transformers: A Survey
    by /u/hardmaru (Machine Learning) on August 8, 2022 at 5:00 am


  • [N] Machine learning talks from SciPy 2022 are up!
    by /u/verfahrensweise (Machine Learning) on August 7, 2022 at 11:56 pm

    Hey everyone! Just wanted to share that the recordings of the machine learning and data management talks from SciPy 2022 are up on YouTube. Some talks that might be of interest:

    – Savin Goyal giving an intro to Metaflow for data science
    – Kevin Kho on what's new in Prefect 1.0
    – Niels Bantilan on productizing machine learning workflows with Flyte
    – Seb Raschka on using regression with ordered categories
    – Davina Zamanzadeh on why you might want to introduce missing values on purpose
    – Paul Anzel on how to test your data
    – Allan Campopiano on why the normal distribution doesn't exist

    Full playlist for the machine learning talks: https://www.youtube.com/playlist?list=PLYx7XA2nY5GcBWLGTzhJ1vxGtHIcyHrRr Full playlist for the data lifecycle talks: https://www.youtube.com/playlist?list=PLYx7XA2nY5Gde0WF1yswQw5InhmSNED8o

  • [D] Is it illegal to use an image GAN's results for commercial purposes if the GAN was trained on copyrighted images?
    by /u/No_Application_5581 (Machine Learning) on August 7, 2022 at 11:17 pm

    Common sense tells me that the answer is "yes", but my confusion is as follows: at the bottom of the Latent Diffusion - LAION-400M Hugging Face space, it says "Who owns the images produced by this demo? Definetly not me! Probably you do." The model was trained on the LAION-400M dataset (obviously), and on its website it says "The images are under their copyright." Since the images are "under their copyright", it seems very possible to me that the model could accidentally spit out an image that is too similar to a copyrighted one from the dataset, and thus I would not "own it". I probably wouldn't even be able to use it, much less for commercial purposes (which is what I'm interested in). It really does look like the images are "under their copyright", because on some results from that model you can almost read "iStock" at the bottom of the image. This would make it pretty dangerous to use the image like I "owned" it. What are your thoughts on this?

  • [P] SharinGAN: Generating Naruto Sharingans with GANs
    by /u/leonardtang (Machine Learning) on August 7, 2022 at 11:13 pm

    Perhaps the most iconic symbol from Naruto is the sharingan, the infamous eye mark of the Uchiha clan. The original sharingans are visually striking and beautifully designed by the show's creators, and many Naruto fans have been inspired to generate their own versions. After watching the show, I too was inspired to craft my own sharingan. Unfortunately, I'm not very artistic, so instead I made the SharinGAN: a GAN to create novel sharingan artwork for me. Since our training data is composed entirely of 15 sharingans from the series, this poses a challenging and fun problem of generating high-fidelity images in the extremely low-data regime. Feel free to check it out at www.sharingans.com. Feedback is welcome!

  • [D] The current and future state of AI/ML is shockingly demoralizing with little hope of redemption
    by /u/Flaky_Suit_8665 (Machine Learning) on August 7, 2022 at 9:25 pm

    I recently encountered the PaLM (Scaling Language Modeling with Pathways) paper from Google Research, and it opened up a can of worms of ideas I've felt I've intuitively had for a while but have been unable to express - and I know I can't be the only one. Sometimes I wonder what the original pioneers of AI - Turing, von Neumann, McCarthy, etc. - would think if they could see the state of AI that we've gotten ourselves into. 67 authors, 83 pages, 540B parameters in a model whose internals no one can say they comprehend with a straight face, 6144 TPUs in a commercial lab that no one has access to, on a rig that no one can afford, trained on a volume of data that a human couldn't process in a lifetime, 1 page on ethics with the same ideas that have been rehashed over and over elsewhere with no attempt at a solution - bias, racism, malicious use, etc. - for purposes that who asked for? When I started my career as an AI/ML research engineer in 2016, I was most interested in two types of tasks: 1) those that most humans could do but that would universally be considered tedious and non-scalable - image classification, sentiment analysis, even document summarization, etc.; 2) tasks that humans lack the capacity to perform as well as computers for various reasons - forecasting, risk analysis, game playing, and so forth. I still love my career, and I try to only work on projects in these areas, but it's getting harder and harder. This is because, somewhere along the way, it became popular and unquestionably acceptable to push AI into domains that were originally uniquely human, those areas that sit at the top of Maslow's hierarchy of needs in terms of self-actualization - art, music, writing, singing, programming, and so forth. These areas of endeavor have negative logarithmic ability curves: the vast majority of people cannot do them well at all, about 10% can do them decently, and 1% or less can do them extraordinarily. The little-discussed problem with AI generation is that, without extreme deterrence, we will sacrifice human achievement at the top percentile in the name of lowering the bar for a larger volume of people, until the AI ability range is the norm. This is because, relative to humans, AI is cheap, fast, and infinite, to the extent that investments in human achievement will be watered down at the societal, educational, and individual level with each passing year. And unlike AI gameplay, which superseded humans decades ago, we won't be able to just disqualify the machines and continue to play as if they didn't exist. Almost everywhere I go, even this forum, I encounter almost universal deference given to current SOTA AI generation systems like GPT-3, CODEX, DALL-E, etc., with almost no one extending their implications to their logical conclusion, which is long-term convergence to the mean, to mediocrity, in the fields they claim to address or even enhance. If you're an artist or writer using DALL-E or GPT-3 to "enhance" your work, or a programmer saying "GitHub Copilot makes me a better programmer", then how could you possibly know? You've disrupted and bypassed your own creative process, which is thoughts -> (optionally words) -> actions -> feedback -> repeat, and instead seeded your canvas with ideas from a machine, the provenance of which you can't understand, nor can the machine reliably explain.
    And the more you do this, the more you make your creative processes dependent on said machine, until you must question whether you could work at the same level without it. When I was a college student, I often dabbled with weed, LSD, and mushrooms, and for a while I thought the ideas I was having while under the influence were revolutionary and groundbreaking - that is, until I took it upon myself to actually start writing down those ideas and then reviewing them while sober, when I realized they weren't that special at all. What I eventually determined is that, under the influence, it was impossible for me to accurately evaluate the drug-induced ideas I was having, because the influencing agent that generates the ideas was disrupting the same frame of reference that is responsible for evaluating them. This is the same principle as: if you took a pill and it made you stupider, would you even know it? I believe that, especially over the long-term timeframe that crosses generations, there's significant risk that current AI-generation developments produce a similar effect on humanity, and we mostly won't even realize it has happened, much like a frog in boiling water. If you have children like I do, how can you be aware of the current SOTA in these areas, project that 20 to 30 years out, and then tell them with a straight face that it is worth pursuing their talent in art, writing, or music? How can you be honest and still say that widespread implementation of auto-correction hasn't made you and others worse at spelling over the years (a task that even I believe most would agree is tedious and worth automating)? Furthermore, I've yet to see anyone discuss the train - generate - train - generate feedback loop that long-term application of AI-generation systems implies. The first generations of these models were trained on wide swaths of web data generated by humans, but if these systems are permitted to continually spit out content without restriction or verification, especially to the extent that it reduces or eliminates development of and investment in human talent over the long term, then what happens to the 4th or 5th generation of models? Eventually we reach a situation where the AI is being trained almost exclusively on AI-generated content, and therefore with each generation it settles more and more into the mean and mediocrity, with no way out using current methods. By the time that happens, what will we have lost in terms of the creative capacity of people, and will we be able to get it back? By relentlessly pursuing this direction so enthusiastically, I'm convinced that we as AI/ML developers, companies, and nations are past the point of no return, and it mostly comes down to the investments in time and money that we've made, as well as a prisoner's dilemma with our competitors. As a society, though, this direction we've chosen for short-term gains will almost certainly make humanity worse off, mostly for those who are powerless to do anything about it - our children, our grandchildren, and generations to come. If you're an AI researcher or a data scientist like myself, how do you turn things back for yourself when you've spent years building your career in this direction? You're likely making near or north of $200k annually in TC and have a family to support, so it's too late, no matter how you feel about the direction the field has gone.
If you’re a company, how do you standby and let your competitors aggressively push their AutoML solutions into more and more markets without putting out your own? Moreover, if you’re a manager or thought leader in this field like Jeff Dean how do you justify to your own boss and your shareholders your team’s billions of dollars in AI investment while simultaneously balancing ethical concerns? You can’t – the only answer is bigger and bigger models, more and more applications, more and more data, and more and more automation, and then automating that even further. If you’re a country like the US, how do responsibly develop AI while your competitors like China single-mindedly push full steam ahead without an iota of ethical concern to replace you in numerous areas in global power dynamics? Once again, failing to compete would be pre-emptively admitting defeat. Even assuming that none of what I’ve described here happens to such an extent, how are so few people not taking this seriously and discounting this possibility? If everything I’m saying is fear-mongering and non-sense, then I’d be interested in hearing what you think human-AI co-existence looks like in 20 to 30 years and why it isn’t as demoralizing as I’ve made it out to be. ​ EDIT: Day after posting this -- this post took off way more than I expected. Even if I received 20 - 25 comments, I would have considered that a success, but this went much further. Thank you to each one of you that has read this post, even more so if you left a comment, and triply so for those who gave awards! I've read almost every comment that has come in (even the troll ones), and am truly grateful for each one, including those in sharp disagreement. I've learned much more from this discussion with the sub than I could have imagined on this topic, from so many perspectives. While I will try to reply as many comments as I can, the sheer comment volume combined with limited free time between work and family unfortunately means that there are many that I likely won't be able to get to. That will invariably include some that I would love respond to under the assumption of infinite time, but I will do my best, even if the latency stretches into days. Thank you all once again! submitted by /u/Flaky_Suit_8665 [link] [comments]

  • [R] PITI: Pretraining is All You Need for Image-to-Image Translation + Gradio Web Demo
    by /u/Illustrious_Row_9971 (Machine Learning) on August 7, 2022 at 8:12 pm


  • [D] Interview question: "What classifier should you use as the meta-classifier in your stacking model and why?"
    by /u/bandalorian (Machine Learning) on August 7, 2022 at 2:08 pm

    Saw this data science interview question posted: Let's say you work at Google. You are developing a spam classifier to classify emails into spam vs. non-spam categories based on their content. You try several different classifiers like SVMs, random forests, etc., but none of them produce satisfactory results, so you decide to combine them using stacking. What classifier should you use as the meta-classifier in your stacking model, and why? From my understanding, a meta-classifier is essentially a model that takes as input features the outputs of other models and provides a final prediction based on them. But what arguments are there for using a specific classifier as the top classifier?
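    One common answer, sketched below: logistic regression is a popular meta-classifier because the meta-features (base-model predictions) are few and already informative, so a simple, well-regularized linear model resists overfitting, and its coefficients show how much each base learner is trusted. A minimal scikit-learn sketch on synthetic data:

```python
# Stacking with a logistic-regression meta-classifier (toy data).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("svm", SVC(probability=True, random_state=0)),
        ("rf", RandomForestClassifier(random_state=0)),
    ],
    final_estimator=LogisticRegression(),  # the meta-classifier
    cv=5,  # base models feed out-of-fold predictions to avoid leakage
)
stack.fit(X, y)
print(f"training accuracy: {stack.score(X, y):.3f}")
```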

  • Optimal pricing for maximum profit using Amazon SageMaker
    by Viktor Enrico Jeney (AWS Machine Learning Blog) on August 4, 2022 at 3:53 pm

    This is a guest post by Viktor Enrico Jeney, Senior Machine Learning Engineer at Adspert. Adspert is a Berlin-based ISV that developed a bid management tool designed to automatically optimize performance marketing and advertising campaigns. The company’s core principle is to automate maximization of profit of ecommerce advertising with the help of artificial intelligence. The

  • Amazon Comprehend announces lower annotation limits for custom entity recognition
    by Luca Guida (AWS Machine Learning Blog) on August 3, 2022 at 8:03 pm

    Amazon Comprehend is a natural-language processing (NLP) service you can use to automatically extract entities, key phrases, language, sentiments, and other insights from documents. For example, you can immediately start detecting entities such as people, places, commercial items, dates, and quantities via the Amazon Comprehend console, AWS Command Line Interface, or Amazon Comprehend APIs. In

  • Promote feature discovery and reuse across your organization using Amazon SageMaker Feature Store and its feature-level metadata capability
    by Arnaud Lauer (AWS Machine Learning Blog) on August 3, 2022 at 5:51 pm

    Amazon SageMaker Feature Store helps data scientists and machine learning (ML) engineers securely store, discover, and share curated data used in training and prediction workflows. Feature Store is a centralized store for features and associated metadata, allowing features to be easily discovered and reused by data scientist teams working on different projects or ML models.

  • Scale YOLOv5 inference with Amazon SageMaker endpoints and AWS Lambda
    by Kevin Song (AWS Machine Learning Blog) on August 2, 2022 at 9:11 pm

    After data scientists carefully come up with a satisfying machine learning (ML) model, the model must be deployed to be easily accessible for inference by other members of the organization. However, deploying models at scale with optimized cost and compute efficiencies can be a daunting and cumbersome task. Amazon SageMaker endpoints provide an easily scalable

  • Simplify iterative machine learning model development by adding features to existing feature groups in Amazon SageMaker Feature Store
    by Chaitra Mathur (AWS Machine Learning Blog) on August 1, 2022 at 5:58 pm

    Feature engineering is one of the most challenging aspects of the machine learning (ML) lifecycle and a phase where the most amount of time is spent—data scientists and ML engineers spend 60–70% of their time on feature engineering. AWS introduced Amazon SageMaker Feature Store during AWS re:Invent 2020, which is a purpose-built, fully managed, centralized

  • [D] Simple Questions Thread
    by /u/AutoModerator (Machine Learning) on July 31, 2022 at 3:00 pm

    Please post your questions here instead of creating a new thread. Encourage others who create new posts for questions to post here instead! The thread will stay alive until the next one, so keep posting after the date in the title. Thanks to everyone for answering questions in the previous thread!

  • Add conversational AI to any contact center with Amazon Lex and the Amazon Chime SDK
    by Prem Ranga (AWS Machine Learning Blog) on July 29, 2022 at 10:11 pm

    Customer satisfaction is a potent metric that directly influences the profitability of an organization. With rapid technological advances in the past decade or so, it’s even more important to elevate customer focus in the following ways: Making your organization accessible to your customers across multiple modalities, including voice, text, social media, and more Providing your

  • Identify the location of anomalies using Amazon Lookout for Vision at the edge without using a GPU
    by Manish Talreja (AWS Machine Learning Blog) on July 29, 2022 at 8:13 pm

    Automated defect detection using computer vision helps improve quality and lower the cost of inspection. Defect detection involves identifying the presence of a defect, classifying types of defects, and identifying where the defects are located. Many manufacturing processes require detection at a low latency, with limited compute resources, and with limited connectivity. Amazon Lookout for

  • Fine-tune and deploy a summarizer model using the Hugging Face Amazon SageMaker containers bringing your own script
    by Viktor Malesevic (AWS Machine Learning Blog) on July 29, 2022 at 6:47 pm

    There have been many recent advancements in the NLP domain. Pre-trained models and fully managed NLP services have democratised access and adoption of NLP. Amazon Comprehend is a fully managed service that can perform NLP tasks like custom entity recognition, topic modelling, sentiment analysis and more to extract insights from data without the need of any prior

  • Team and user management with Amazon SageMaker and AWS SSO
    by Yevgeniy Ilyin (AWS Machine Learning Blog) on July 29, 2022 at 6:34 pm

    Amazon SageMaker Studio is a web-based integrated development environment (IDE) for machine learning (ML) that lets you build, train, debug, deploy, and monitor your ML models. Each onboarded user in Studio has their own dedicated set of resources, such as compute instances, a home directory on an Amazon Elastic File System (Amazon EFS) volume, and

  • Build and train ML models using a data mesh architecture on AWS: Part 2
    by Karim Hammouda (AWS Machine Learning Blog) on July 29, 2022 at 6:29 pm

    This is the second part of a series that showcases the machine learning (ML) lifecycle with a data mesh design pattern for a large enterprise with multiple lines of business (LOBs) and a Center of Excellence (CoE) for analytics and ML. In part 1, we addressed the data steward persona and showcased a data mesh

  • Build and train ML models using a data mesh architecture on AWS: Part 1
    by Karim Hammouda (AWS Machine Learning Blog) on July 29, 2022 at 6:28 pm

    Organizations across various industries are using artificial intelligence (AI) and machine learning (ML) to solve business challenges specific to their industry. For example, in the financial services industry, you can use AI and ML to solve challenges around fraud detection, credit risk prediction, direct marketing, and many others. Large enterprises sometimes set up a center

  • Integrate Amazon SageMaker Data Wrangler with MLOps workflows
    by Rodrigo Alarcon (AWS Machine Learning Blog) on July 27, 2022 at 6:00 pm

    As enterprises move from running ad hoc machine learning (ML) models to using AI/ML to transform their business at scale, the adoption of ML Operations (MLOps) becomes inevitable. As shown in the following figure, the ML lifecycle begins with framing a business problem as an ML use case followed by a series of phases, including

  • Tiny cars and big talent show Canadian policymakers the power of machine learning
    by Nicole Foster (AWS Machine Learning Blog) on July 26, 2022 at 8:15 pm

    In the end, it came down to 213 thousandths of a second! That was the difference between the two best times in the finale of the first AWS DeepRacer Student Wildcard event hosted in Ottawa, Canada this May. I watched in awe as 13 students competed in a live wildcard race for the AWS

  • Predict shipment ETA with no-code machine learning using Amazon SageMaker Canvas
    by Rajakumar Sampathkumar (AWS Machine Learning Blog) on July 26, 2022 at 8:10 pm

    Logistics and transportation companies track ETA (estimated time of arrival), which is a key metric for their business. Their downstream supply chain activities are planned based on this metric. However, delays often occur, and the ETA might differ from the product’s or shipment’s actual time of arrival (ATA), for instance due to shipping distance or

  • Developing advanced machine learning systems at Trumid with the Deep Graph Library for Knowledge Embedding
    by Marc van Oudheusden (AWS Machine Learning Blog) on July 25, 2022 at 6:56 pm

    This is a guest post co-written with Mutisya Ndunda from Trumid. Like many industries, the corporate bond market doesn’t lend itself to a one-size-fits-all approach. It’s vast, liquidity is fragmented, and institutional clients demand solutions tailored to their specific needs. Advances in AI and machine learning (ML) can be employed to improve the customer experience,

  • Organize your machine learning journey with Amazon SageMaker Experiments and Amazon SageMaker Pipelines
    by Paolo Di Francesco (AWS Machine Learning Blog) on July 21, 2022 at 5:05 pm

    The process of building a machine learning (ML) model is iterative until you find the candidate model that is performing well and is ready to be deployed. As data scientists iterate through that process, they need a reliable method to easily track experiments to understand how each model version was built and how it performed.

  • Build taxonomy-based contextual targeting using AWS Media Intelligence and Hugging Face BERT
    by Aramide Kehinde (AWS Machine Learning Blog) on July 20, 2022 at 4:29 pm

    As new data privacy regulations like GDPR (General Data Protection Regulation, 2017) have come into effect, customers are under increased pressure to monetize media assets while abiding by the new rules. Monetizing media while respecting privacy regulations requires the ability to automatically extract granular metadata from assets like text, images, video, and audio files at

  • Image Augmentation with Keras Preprocessing Layers and tf.image
    by Adrian Tam (Blog) on July 20, 2022 at 2:10 am

    When we work on a machine learning problem related to images, not only do we need to collect some images as training data, but we also need to employ augmentation to create variations in the images. This is especially true for more complex object recognition problems. There are many ways to do image augmentation. You may use some… The post "Image Augmentation with Keras Preprocessing Layers and tf.image" appeared first on Machine Learning Mastery.
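    A minimal sketch of the approach the post covers: Keras preprocessing layers placed inside the model, so random augmentation runs on the fly during training (the layer choices and magnitudes here are illustrative):

```python
# On-the-fly augmentation with Keras preprocessing layers (active in training only).
from tensorflow import keras

augment = keras.Sequential([
    keras.layers.RandomFlip("horizontal"),
    keras.layers.RandomRotation(0.1),  # up to +/-10% of a full turn
    keras.layers.RandomZoom(0.2),
])

model = keras.Sequential([
    keras.Input(shape=(32, 32, 3)),
    augment,                           # no-op at inference time
    keras.layers.Conv2D(32, 3, activation="relu"),
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dense(10, activation="softmax"),
])
model.summary()
```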

  • Localize content into multiple languages using AWS machine learning services
    by Reagan Rosario (AWS Machine Learning Blog) on July 19, 2022 at 3:29 pm

    Over the last few years, online education platforms have seen an increase in adoption of and an uptick in demand for video-based learnings because it offers an effective medium to engage learners. To expand to international markets and address a culturally and linguistically diverse population, businesses are also looking at diversifying their learning offerings by

  • Identify rooftop solar panels from satellite imagery using Amazon Rekognition Custom Labels
    by Melanie Li (AWS Machine Learning Blog) on July 19, 2022 at 3:24 pm

    Renewable resources like sunlight provide a sustainable and carbon neutral mechanism to generate power. Governments in many countries are providing incentives and subsidies to households to install solar panels as part of small-scale renewable energy schemes. This has created a huge demand for solar panels. Reaching out to potential customers at the right time, through

  • Image Augmentation for Deep Learning with Keras
    by Jason Brownlee (Blog) on July 16, 2022 at 7:00 pm

    Data preparation is required when working with neural network and deep learning models. Increasingly, data augmentation is also required on more complex object recognition tasks. In this post you will discover how to use data preparation and data augmentation with your image datasets when developing and evaluating deep learning models in Python with Keras. After… The post "Image Augmentation for Deep Learning with Keras" appeared first on Machine Learning Mastery.
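    A sketch of the classic Keras approach from the era of this post: ImageDataGenerator yields randomly transformed batches at training time (the array names below are placeholders):

```python
# Classic Keras augmentation: ImageDataGenerator streams transformed batches.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(
    rotation_range=20,      # random rotations up to 20 degrees
    width_shift_range=0.1,  # horizontal shifts up to 10% of width
    height_shift_range=0.1, # vertical shifts up to 10% of height
    horizontal_flip=True,
)
# Assuming X_train has shape (N, H, W, C) and y_train holds matching labels:
# model.fit(datagen.flow(X_train, y_train, batch_size=32), epochs=10)
```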

  • Loss Functions in TensorFlow
    by Zhe Ming Chng (Blog) on July 15, 2022 at 2:52 am

    The loss metric is very important for neural networks. Since every machine learning model is one optimization problem or another, the loss is the objective function to minimize. In neural networks, the optimization is done with gradient descent and backpropagation. But what are loss functions, and how do they affect our neural networks? In this post… The post "Loss Functions in TensorFlow" appeared first on Machine Learning Mastery.
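    As a quick illustration of the point: a loss in TensorFlow is just a differentiable function of (y_true, y_pred) that training minimizes; the built-in binary cross-entropy and a hand-rolled version agree up to numerical details (the values below are toy data):

```python
# A loss is just a function of (y_true, y_pred) to be minimized.
import tensorflow as tf

y_true = tf.constant([0.0, 1.0, 1.0])
y_pred = tf.constant([0.1, 0.8, 0.6])

bce = tf.keras.losses.BinaryCrossentropy()
print(float(bce(y_true, y_pred)))  # built-in binary cross-entropy

eps = 1e-7  # clip to avoid log(0), as Keras does internally
manual = -tf.reduce_mean(
    y_true * tf.math.log(y_pred + eps)
    + (1.0 - y_true) * tf.math.log(1.0 - y_pred + eps)
)
print(float(manual))  # matches the built-in value up to numerics
```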

  • High-Fidelity Synthetic Data for Data Engineers and Data Scientists Ali