AWS Machine Learning Certification Specialty Exam Prep

AWS Machine Learning Specialty Certification Prep (Android)


The AWS Certified Machine Learning Specialty validates expertise in building, training, tuning, and deploying machine learning (ML) models on AWS.

Use this App to learn about Machine Learning on AWS and prepare for the AWS Machine Learning Specialty Certification MLS-C01.

Download AWS Machine Learning Specialty Exam Prep App on iOS

Download AWS Machine Learning Specialty Exam Prep App on Android/Web/Amazon

AWS MLS-C01 Machine Learning Specialty Exam Prep PRO



The App provides hundreds of quizzes and practice exam questions covering:

– Machine Learning Operations on AWS

– Modeling

– Data Engineering

– Computer Vision

– Exploratory Data Analysis

– ML Implementation & Operations

– Machine Learning Basics Questions and Answers

– Machine Learning Advanced Questions and Answers

– Scorecard

– Countdown timer

– Machine Learning Cheat Sheets

– Machine Learning Interview Questions and Answers

– Machine Learning Latest News

The App covers Machine Learning Basics and Advanced topics including: NLP, Computer Vision, Python, linear regression, logistic regression, sampling, datasets, statistical interaction, selection bias, non-Gaussian distributions, the bias-variance trade-off, the Normal Distribution, correlation and covariance, point estimates and confidence intervals, A/B testing, p-values, statistical power and sensitivity, overfitting and underfitting, regularization, the Law of Large Numbers, confounding variables, survivorship bias, univariate, bivariate and multivariate analysis, resampling, ROC curves, TF-IDF vectorization, cluster sampling, and more.
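Several of these topics lend themselves to short code illustrations. For instance, a minimal scikit-learn sketch (the toy sentences and labels below are invented purely for illustration) ties together TF-IDF vectorization, logistic regression, and ROC AUC evaluation:

```python
# Minimal sketch: TF-IDF vectorization + logistic regression + ROC AUC.
# Toy data invented for illustration; assumes scikit-learn is installed.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

texts = ["great product", "terrible service", "loved it", "awful experience"]
labels = [1, 0, 1, 0]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(texts)           # sparse TF-IDF matrix

clf = LogisticRegression().fit(X, labels)
scores = clf.predict_proba(X)[:, 1]           # probability of the positive class
print("ROC AUC:", roc_auc_score(labels, scores))
```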

Domain 1: Data Engineering

Create data repositories for machine learning.

Identify data sources (e.g., content and location, primary sources such as user data)

Determine storage mediums (e.g., DB, Data Lake, S3, EFS, EBS)

Identify and implement a data ingestion solution.

Data job styles/types (batch load, streaming)

Data ingestion pipelines (Batch-based ML workloads and streaming-based ML workloads), etc.
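As a hedged illustration of the streaming ingestion style listed above, the following boto3 sketch puts a record onto an Amazon Kinesis data stream. The stream name is a placeholder, and configured AWS credentials are assumed:

```python
# Minimal sketch of streaming ingestion with Amazon Kinesis Data Streams.
# Assumes AWS credentials are configured and that a stream named
# "ml-ingest-stream" (placeholder) already exists.
import json
import boto3

kinesis = boto3.client("kinesis")

record = {"user_id": 42, "event": "click", "timestamp": "2024-07-15T12:00:00Z"}

response = kinesis.put_record(
    StreamName="ml-ingest-stream",        # placeholder stream name
    Data=json.dumps(record).encode("utf-8"),
    PartitionKey=str(record["user_id"]),  # controls shard assignment
)
print(response["SequenceNumber"])
```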

Domain 2: Exploratory Data Analysis

Sanitize and prepare data for modeling.

Perform feature engineering.

Analyze and visualize data for machine learning.
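A minimal pandas sketch of these tasks might look like the following; the toy DataFrame and column names are invented for illustration:

```python
# Minimal EDA / feature-engineering sketch with pandas.
# Toy data and column names invented for illustration.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "age": [25, 32, None, 47],
    "income": [40000, 52000, 61000, 87000],
    "city": ["NYC", "SEA", "NYC", "SFO"],
})

# Sanitize: impute missing values
df["age"] = df["age"].fillna(df["age"].median())

# Feature engineering: log-transform a skewed feature, one-hot encode a categorical
df["log_income"] = np.log(df["income"])
df = pd.get_dummies(df, columns=["city"])

# Quick analysis / visualization
print(df.describe())
df.hist(figsize=(8, 6))   # histograms of numeric features (requires matplotlib)
```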

Domain 3: Modeling

Frame business problems as machine learning problems.

Select the appropriate model(s) for a given machine learning problem.

Train machine learning models.

Perform hyperparameter optimization.

Evaluate machine learning models.
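Below is a hedged sketch of hyperparameter optimization with SageMaker automatic model tuning for the built-in XGBoost algorithm. The IAM role ARN and S3 paths are placeholders, and an AWS account with SageMaker permissions is assumed:

```python
# Hedged sketch of SageMaker automatic model tuning (hyperparameter optimization)
# for the built-in XGBoost algorithm. Role ARN and S3 paths are placeholders.
import sagemaker
from sagemaker.estimator import Estimator
from sagemaker.tuner import HyperparameterTuner, ContinuousParameter, IntegerParameter

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerRole"     # placeholder role ARN
image_uri = sagemaker.image_uris.retrieve("xgboost", session.boto_region_name, version="1.7-1")

estimator = Estimator(
    image_uri=image_uri,
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://my-bucket/models/",                  # placeholder S3 path
    hyperparameters={"objective": "binary:logistic", "eval_metric": "auc", "num_round": 100},
)

tuner = HyperparameterTuner(
    estimator,
    objective_metric_name="validation:auc",
    hyperparameter_ranges={
        "eta": ContinuousParameter(0.01, 0.3),
        "max_depth": IntegerParameter(3, 10),
    },
    max_jobs=10,
    max_parallel_jobs=2,
)

tuner.fit({"train": "s3://my-bucket/train/", "validation": "s3://my-bucket/val/"})
```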

Domain 4: Machine Learning Implementation and Operations

Build machine learning solutions for performance, availability, scalability, resiliency, and fault tolerance.

Recommend and implement the appropriate machine learning services and features for a given problem.

Apply basic AWS security practices to machine learning solutions.

Deploy and operationalize machine learning solutions.
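For example, once a model is deployed behind a SageMaker real-time endpoint, it can be invoked with a few lines of boto3. This is a minimal sketch; the endpoint name is a placeholder and configured AWS credentials are assumed:

```python
# Hedged sketch of invoking a deployed SageMaker real-time endpoint.
import boto3

runtime = boto3.client("sagemaker-runtime")

response = runtime.invoke_endpoint(
    EndpointName="my-ml-endpoint",        # placeholder endpoint name
    ContentType="text/csv",
    Body=b"5.1,3.5,1.4,0.2",              # one CSV row of features
)
print(response["Body"].read().decode("utf-8"))
```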

Machine Learning Services covered:

Amazon Comprehend

AWS Deep Learning AMIs (DLAMI)

AWS DeepLens

Amazon Forecast

Amazon Fraud Detector

Amazon Lex

Amazon Polly

Amazon Rekognition

Amazon SageMaker

Amazon Textract

Amazon Transcribe

Amazon Translate
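As one example of how these services are called programmatically, here is a minimal boto3 sketch for Amazon Comprehend sentiment detection (configured AWS credentials and region are assumed):

```python
# Hedged sketch: calling Amazon Comprehend for sentiment analysis via boto3.
import boto3

comprehend = boto3.client("comprehend")

result = comprehend.detect_sentiment(
    Text="The new SageMaker feature made our deployment much easier.",
    LanguageCode="en",
)
print(result["Sentiment"], result["SentimentScore"])
```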

Other Services and topics covered are:

Ingestion/Collection

Processing/ETL

Data analysis/visualization

Model training

Model deployment/inference

Operational

AWS ML application services

Languages relevant to ML (for example, Python, Java, Scala, R, SQL)

Notebooks and integrated development environments (IDEs)

Amazon S3, SageMaker, Kinesis, Lake Formation, Athena, Kibana, Redshift, Textract, EMR, and Glue; data formats such as CSV, JSON, images, and Parquet; and databases

Amazon EC2, Amazon Elastic Container Registry (Amazon ECR), Amazon Elastic Container Service (Amazon ECS), Amazon Elastic Kubernetes Service (Amazon EKS), Amazon Redshift

Important: To succeed with the real exam, do not memorize the answers in this app. It is very important that you understand why a question is right or wrong and the concepts behind it by carefully reading the reference documents in the answers.

Note and disclaimer: We are not affiliated with Microsoft, Azure, Google, or Amazon. The questions are put together based on the certification study guide and materials available online. The questions in this app should help you pass the exam, but success is not guaranteed. We are not responsible for any exam you do not pass.

Download AWS Machine Learning Specialty Exam Prep App on iOS

Download AWS Machine Learning Specialty Exam Prep App on Android/Web/Amazon

  • [R] Protein language models expose viral mimicry and immune escape
    by /u/ddofer (Machine Learning) on July 16, 2024 at 9:13 am

    We got accepted at ICML 24/ML4LMS workshop, so I thought i'd share 🙂 "Protein Language Models Expose Viral Mimicry and Immune Escape" TL;DR: 🧬 Research Overview: Viruses mimic host proteins to escape detection by the immune system. We used Protein Language Models (PLMs) to differentiate viral proteins from human ones, with 99.7% accuracy. 📊 Insights: Our research shows that the PLMs and the biological immune system make similar errors. By identifying and analyzing these errors, we gain valuable insights into immunoreactivity and potential avenues for developing more effective vaccines and treatments. We also show a novel, explainable, multimodal tabular error analysis approach for understanding insights and mistakes made on any problem, letting us understand what characterizes the mistakes made by Deep learning Language models/PLMs . 🔗 Paper : https://openreview.net/forum?id=gGnJBLssbb&noteId=gGnJBLssbb Code: https://github.com/ddofer/ProteinHumVir Meet me and the poster (#116) at the ICML/ML4LMS workshop!: https://openreview.net/attachment?id=gGnJBLssbb&name=poster doi: https://doi.org/10.1101/2024.03.14.585057 submitted by /u/ddofer [link] [comments]

  • [D] What happened to "creative" decoding strategy?
    by /u/zyl1024 (Machine Learning) on July 15, 2024 at 6:35 pm

    For GPT-2 and most models at that time, the naive greedy decoding is extremely prone to generating repetitive and nonsensical outputs very fast, and many techniques, such as top-p sampling, nucleus sampling, repetition penalty, n-gram penalty, etc. are needed. (e.g. https://arxiv.org/pdf/1904.09751 ) For recent LLMs, I haven't been using any of these tricks, and instead, any temperature between 0 and 1 seems to work just fine. The only repetitive generation that I've observed seem to be in math reasoning, when the model wants to do some exhaustive search that didn't succeed. So are all these custom decoding strategies a thing of the past, and we don't need to worry about degenerate content generation anymore? submitted by /u/zyl1024 [link] [comments]
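For readers who want to compare these strategies hands-on, a minimal Hugging Face Transformers sketch (using the small public gpt2 checkpoint) contrasts greedy decoding with top-p sampling, temperature, and repetition penalties:

```python
# Hedged sketch comparing greedy decoding with sampling-based decoding
# (top-p / nucleus sampling, temperature, repetition penalties) using
# Hugging Face Transformers and the small GPT-2 checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The future of machine learning is", return_tensors="pt")

# Greedy decoding: prone to repetitive text with older, smaller models
greedy = model.generate(**inputs, max_new_tokens=40, do_sample=False)

# Sampling with the decoding "tricks" mentioned in the post
sampled = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,
    top_p=0.9,                 # nucleus sampling
    temperature=0.7,
    repetition_penalty=1.2,
    no_repeat_ngram_size=3,    # n-gram penalty
)

print(tokenizer.decode(greedy[0], skip_special_tokens=True))
print(tokenizer.decode(sampled[0], skip_special_tokens=True))
```

With a modern instruction-tuned model, plain temperature sampling is often sufficient, which matches the observation in the post.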

  • [R] Any-Property-Conditional Molecule Generation with Self-Criticism using Spanning Trees
    by /u/AlexiaJM (Machine Learning) on July 15, 2024 at 5:47 pm

    submitted by /u/AlexiaJM [link] [comments]

  • [D] Ideas on how to improve time series forecasting with unknown data
    by /u/Ok_Bottle2306 (Machine Learning) on July 15, 2024 at 5:20 pm

    Hi all, My company decided to add an analytical suite as part of our offering and I was tasked with creating a prediction solution. My problem starts with the fact that I do not know what data I will be getting. It can be monthly financial aggregations like revenue, and it can be daily sales data. I currently use implementations of ETS, SARIMAX, Holt-Winters and N-beats (just in case). I do automatic hyperparameter tuning with an expanding window, and then just pick the model with the best MAPE. As for preprocessing, I remove outliers which are not seasonal and use Savitzky-Golay filters before feeding the data to the models. Any suggestions regarding how to make this less of a Hail Mary? submitted by /u/Ok_Bottle2306 [link] [comments]
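A minimal sketch of the pipeline described in the post, using a synthetic monthly series invented for illustration, could combine Savitzky-Golay smoothing, Holt-Winters fitting, and an expanding-window MAPE backtest:

```python
# Hedged sketch of the approach described above: Savitzky-Golay smoothing,
# Holt-Winters fitting, and expanding-window evaluation with MAPE.
# The synthetic monthly series is invented for illustration.
import numpy as np
from scipy.signal import savgol_filter
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(0)
t = np.arange(60)
series = 100 + 0.5 * t + 10 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 2, 60)

smoothed = savgol_filter(series, window_length=7, polyorder=2)

# Expanding-window backtest: refit on all data up to each point, forecast one step
errors = []
for split in range(40, 59):
    model = ExponentialSmoothing(
        smoothed[:split], trend="add", seasonal="add", seasonal_periods=12
    ).fit()
    forecast = model.forecast(1)[0]
    actual = series[split]
    errors.append(abs((actual - forecast) / actual))

print("Expanding-window MAPE: %.2f%%" % (100 * np.mean(errors)))
```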

  • Video auto-dubbing using Amazon Translate, Amazon Bedrock, and Amazon Polly
    by Na Yu (AWS Machine Learning Blog) on July 15, 2024 at 5:00 pm

    This post is co-written with MagellanTV and Mission Cloud.  Video dubbing, or content localization, is the process of replacing the original spoken language in a video with another language while synchronizing audio and video. Video dubbing has emerged as a key tool in breaking down linguistic barriers, enhancing viewer engagement, and expanding market reach. However,

  • How Mixbook used generative AI to offer personalized photo book experiences
    by Vlad Lebedev (AWS Machine Learning Blog) on July 15, 2024 at 4:49 pm

    Years ago, Mixbook undertook a strategic initiative to transition their operational workloads to Amazon Web Services (AWS), a move that has continually yielded significant advantages. This pivotal decision has been instrumental in propelling them towards fulfilling their mission, ensuring their system operations are characterized by reliability, superior performance, and operational efficiency. In this post we show you how Mixbook used generative artificial intelligence (AI) capabilities in AWS to personalize their photo book experiences—a step towards their mission.

  • [N] Yoshua Bengio's latest letter addressing arguments against taking AI safety seriously
    by /u/qtangs (Machine Learning) on July 15, 2024 at 2:00 pm

    https://yoshuabengio.org/2024/07/09/reasoning-through-arguments-against-taking-ai-safety-seriously/ Summary by GPT-4o: "Reasoning through arguments against taking AI safety seriously" by Yoshua Bengio: Summary Introduction Bengio reflects on his year of advocating for AI safety, learning through debates, and synthesizing global expert views in the International Scientific Report on AI safety. He revisits arguments against AI safety concerns and shares his evolved perspective on the potential catastrophic risks of AGI and ASI. Headings and Summary The Importance of AI Safety Despite differing views, there is a consensus on the need to address risks associated with AGI and ASI. The main concern is the unknown moral and behavioral control over such entities. Arguments Dismissing AGI/ASI Risks Skeptics argue AGI/ASI is either impossible or too far in the future to worry about now. Bengio refutes this, stating we cannot be certain about the timeline and need to prepare regulatory frameworks proactively. For those who think AGI and ASI are impossible or far in the future He challenges the idea that current AI capabilities are far from human-level intelligence, citing historical underestimations of AI advancements. The trend of AI capabilities suggests we might reach AGI/ASI sooner than expected. For those who think AGI is possible but only in many decades Regulatory and safety measures need time to develop, necessitating action now despite uncertainties about AGI’s timeline. For those who think that we may reach AGI but not ASI Bengio argues that even AGI presents significant risks and could quickly lead to ASI, making it crucial to address these dangers. For those who think that AGI and ASI will be kind to us He counters the optimism that AGI/ASI will align with human goals, emphasizing the need for robust control mechanisms to prevent AI from pursuing harmful objectives. For those who think that corporations will only design well-behaving AIs and existing laws are sufficient Profit motives often conflict with safety, and existing laws may not adequately address AI-specific risks and loopholes. For those who think that we should accelerate AI capabilities research and not delay benefits of AGI Bengio warns against prioritizing short-term benefits over long-term risks, advocating for a balanced approach that includes safety research. For those concerned that talking about catastrophic risks will hurt efforts to mitigate short-term human-rights issues with AI Addressing both short-term and long-term AI risks can be complementary, and ignoring catastrophic risks would be irresponsible given their potential impact. For those concerned with the US-China cold war AI development should consider global risks and seek collaborative safety research to prevent catastrophic mistakes that transcend national borders. For those who think that international treaties will not work While challenging, international treaties on AI safety are essential and feasible, especially with mechanisms like hardware-enabled governance. For those who think the genie is out of the bottle and we should just let go and avoid regulation Despite AI's unstoppable progress, regulation and safety measures are still critical to steer AI development towards positive outcomes. For those who think that open-source AGI code and weights are the solution Open-sourcing AI has benefits but also significant risks, requiring careful consideration and governance to prevent misuse and loss of control. 
For those who think worrying about AGI is falling for Pascal’s wager Bengio argues that AI risks are substantial and non-negligible, warranting serious attention and proactive mitigation efforts. Conclusion Bengio emphasizes the need for a collective, cautious approach to AI development, balancing the pursuit of benefits with rigorous safety measures to prevent catastrophic outcomes. submitted by /u/qtangs [link] [comments]

  • [D] NER Model for Hindi language
    by /u/expiredUserAddress (Machine Learning) on July 15, 2024 at 12:19 pm

    I am building a project on NER. I was able to find good models for the English language. Is there any GOOD model for Hindi and other Indian languages? I tried IndicNLP, BERT, etc., but the output is not good. Can someone help me find a good model? submitted by /u/expiredUserAddress [link] [comments]

  • [D] Best open source LLM for graph based questions answering
    by /u/Raise_Fickle (Machine Learning) on July 15, 2024 at 11:19 am

    So there are 2 parts to this question: (1) an LLM for creating a knowledge graph from unstructured text, and (2) answering questions based on the knowledge graph. Does anyone have experience with either of these, in terms of open source/finetuned LLMs? submitted by /u/Raise_Fickle [link] [comments]

  • [D] Best "Retrieval Augmented Generation" Orchestrator in your opinion?
    by /u/PsychologicalAd7535 (Machine Learning) on July 15, 2024 at 7:52 am

    So I'm currently developing a project that includes Gen AI with Vertex AI for Gemini 1.5 Flash, and I'm planning to add a RAG system for it; I plan on using MongoDB for the vector DB to keep things simple. Now I'm trying to decide which orchestrator I should use for the RAG system to speed up development. What do y'all suggest? submitted by /u/PsychologicalAd7535 [link] [comments]

  • [R] interacting with GUI
    by /u/eKKiM__ (Machine Learning) on July 14, 2024 at 11:38 pm

    Today I was impressed with a demo I stumbled upon. The Adept ACT-1 model can interact with a GUI and execute tasks based on text prompts ( https://www.adept.ai/blog/act-1 ). Is any publicly available model able to perform something similar? submitted by /u/eKKiM__ [link] [comments]

  • [D] Ideas on how to create a hierarchical LLM workflow?
    by /u/moonbunR (Machine Learning) on July 14, 2024 at 11:09 pm

    Is it possible to create an AI agent workflow where LLM A can speak to LLM B back-and-forth e.g. 10 times to iterate on a specific case and give back a response only after those 10 passes? For instance, if I have a strictly prompted low temperature LLM 1 and a less strict more creative LLM2. LLM2 criticizes LLM1 and gives suggestions on how to improve (iterate). You can specify the number of "repeats" e.g. Run LLM 2 a maximum of 6 times or 10 or anything I set. Idea is to create a solution where you can put a "supervisor" over "the worker or workers" and build that into an LLM hierarchical model. I’m trying to use SmythOS to create a quick proof of concept and I can’t get the data to be communicated back and forth between even two LLMs. For my specific need I must build a multi level hierarchy with a set amount of repeats. To present it well: LLM1 - creates outline for article LLM2 - writes the article and gives LLM3 for review LLM3 - critically reviews it according to his checklist. If everything is good, the final response is produced, if not, LLM 3 points out things for improvement and hands it to LLM 4 for critical insights(LLM 4 is a pre-prompted LLM that will give very specific insights or focus on a specific feeling/information being delivered in a specific way), and then it hands the whole script back to LLM2, being the only LLM that is actually writing. The process repeats until LLM3 is satisfied or until it exceeds X repetitions (where X you set ahead of time). How would you go about this? submitted by /u/moonbunR [link] [comments]
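A framework-agnostic sketch of the loop described in the post might look like the following. `call_llm` is a hypothetical placeholder to be replaced with whatever chat-completion client is actually used (Bedrock, OpenAI, a local model, or an orchestrator such as SmythOS):

```python
# Hedged, framework-agnostic sketch of a hierarchical LLM review loop with a
# bounded number of iterations. `call_llm` is a hypothetical placeholder.
def call_llm(system_prompt: str, user_message: str) -> str:
    raise NotImplementedError("Replace with your actual LLM client call")

def write_with_review(task: str, max_rounds: int = 6) -> str:
    outline = call_llm("You create article outlines.", task)                  # LLM1
    draft = call_llm("You write articles from outlines.", outline)            # LLM2
    for _ in range(max_rounds):
        review = call_llm("You critically review drafts against a checklist. "
                          "Reply APPROVED if the draft passes.", draft)       # LLM3
        if "APPROVED" in review:
            return draft
        insights = call_llm("You give focused, specific improvement notes.",
                            review)                                           # LLM4
        draft = call_llm("You revise drafts using reviewer feedback.",
                         f"Draft:\n{draft}\n\nFeedback:\n{insights}")         # LLM2 again
    return draft   # give up after max_rounds and return the latest draft
```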

  • [P] Onnx processing slowing down with time
    by /u/SPRODEM (Machine Learning) on July 14, 2024 at 7:02 pm

    I am working on creating an ONNX desktop application using PyQt. The application uses an ONNX model in the backend. onnxruntime-gpu is used for processing on the GPU. I am super-resolving videos with a single-image super-resolution model, frame by frame. The model starts well. However, over time the processing keeps slowing down. I guess there is some garbage collection going wrong, but I am not sure. Looking for tips on how to keep the model running smoothly. I want to run it for hours. submitted by /u/SPRODEM [link] [comments]
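A common pattern for long-running ONNX Runtime inference is to create the InferenceSession once, reuse it for every frame, and keep per-frame allocations out of the loop. A minimal sketch follows; the model path, provider list, and input layout are assumptions:

```python
# Hedged sketch of the usual pattern for long-running ONNX Runtime inference:
# build the InferenceSession once and reuse it per frame.
# Model path and input layout are placeholders/assumptions.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "super_resolution.onnx",                                   # placeholder path
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
input_name = session.get_inputs()[0].name

def upscale_frame(frame: np.ndarray) -> np.ndarray:
    # frame assumed float32 NCHW; avoid keeping references to old outputs
    return session.run(None, {input_name: frame})[0]
```

If throughput still degrades over hours, profiling host/GPU memory growth per frame is a reasonable next step.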

  • [D] Detecting and Identifying Seen and Unseen Ads Using YOLO and Visual Transformers - Feasibility?
    by /u/ThickDoctor007 (Machine Learning) on July 14, 2024 at 5:58 pm

    I’m working on a project where I need to detect both seen and previously unseen ads in images. Here’s the setup: • I have annotations for around 1600 images. • YOLO performs well for the annotated ads. • For the annotated ads, I also have a list of source images (the annotated ones are in the wild). My goal is to detect previously unseen ads as well. Here’s my current approach: 1. YOLO Model: I’ve trained a generalized YOLO model where all ads are labeled as a single class, “ad.” 2. Comparison Using Embeddings: Once an ad is detected, the plan is to compare it with the ads from the database using embeddings. 3. Fine-tuning a Visual Transformer: I’ll fine-tune a visual transformer on the source images to aid in distinguishing among ads with slight variations (e.g., added text like “30% off” or changed fonts). My questions are: • From your experience, is this approach promising for detecting and identifying both seen and unseen ads? • Are there better methodologies or techniques I should consider for this problem? Thanks in advance for your insights! submitted by /u/ThickDoctor007 [link] [comments]
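Step 2 of the approach, comparing a detected crop against a database of known-ad embeddings, can be sketched with plain cosine similarity. `embed` below is a hypothetical placeholder for the fine-tuned vision-transformer encoder, and the threshold is illustrative:

```python
# Hedged sketch of embedding-based matching of detected ad crops against a
# database of known ads. `embed` is a hypothetical placeholder encoder.
import numpy as np

def embed(image) -> np.ndarray:
    raise NotImplementedError("Replace with the fine-tuned ViT embedding model")

def match_ad(crop, db_embeddings: np.ndarray, db_ids: list, threshold: float = 0.85):
    query = embed(crop)
    query = query / np.linalg.norm(query)
    db = db_embeddings / np.linalg.norm(db_embeddings, axis=1, keepdims=True)
    sims = db @ query                      # cosine similarity against every known ad
    best = int(np.argmax(sims))
    if sims[best] >= threshold:
        return db_ids[best], float(sims[best])   # previously seen ad
    return None, float(sims[best])               # likely an unseen ad
```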

  • [D] [R] In VQGAN, after quantization, how is an image generated?
    by /u/ShlomiRex (Machine Learning) on July 14, 2024 at 5:31 pm

    I learned that we first train the VQ-VAE (which includes the encoder, quantization, decoder, and discriminator) and only then train the transformer. But if that's true, how is an image generated, and processed by the discriminator, if the transformer module is not yet trained? Let's say both the VQ-VAE and transformer modules are trained; how does image generation work then? I read that the transformer only predicts the next patch (sliding window technique). So does it predict the actual pixels? Or does it predict the code vector? But I read that the code vectors are used as conditioning for the transformer. I just don't understand the interaction between the quantized vectors (z_q); what happens after this? If the transformer predicts the next codebook vector, then why do we have the codebook vectors as a sequence as input to the transformer? submitted by /u/ShlomiRex [link] [comments]

  • [R] What is Flash Attention? Explained
    by /u/mehul_gupta1997 (Machine Learning) on July 14, 2024 at 4:32 pm

    A major advancement over the standard attention mechanism (used in "Attention Is All You Need"), Flash Attention improves on its space and time complexity. Check it out to learn more: https://youtu.be/znhk2mgplWY?si=Q3fz5GuMuyyWSdhd submitted by /u/mehul_gupta1997 [link] [comments]

  • [R] Graph Vision: A python library to create segment mappings.
    by /u/Kian5658 (Machine Learning) on July 14, 2024 at 3:42 pm

    submitted by /u/Kian5658 [link] [comments]

  • [D] Simple Questions Thread
    by /u/AutoModerator (Machine Learning) on July 14, 2024 at 3:00 pm

    Please post your questions here instead of creating a new thread. Encourage others who create new posts for questions to post here instead! Thread will stay alive until next one so keep posting after the date in the title. Thanks to everyone for answering questions in the previous thread! submitted by /u/AutoModerator [link] [comments]

  • [D] Relying solely on LLM APIs vs. using various different models.
    by /u/Seankala (Machine Learning) on July 14, 2024 at 2:27 pm

    I'm not sure how to exactly word this, so please excuse the rather vague title. My company's currently trying to create a chatbot that's specifically used in e-commerce. The problem that we're facing is that the LLM seems to struggle in differentiating among queries for certain function calls. I'm thinking that it would be better to let LLMs do what they do best, which in my opinion is text generation, and using other models to take care of the other stuff. For example, if we need to successfully classify customer queries into classes like "request_a," "request_b," etc. it makes more sense to me to make a dedicated model for that (large or not) and pass the results to the LLM to generate a user-friendly message. At the same time, however, I'm admittedly not that familiar with LLMs being used in real-world applications and am not sure if that would be an even worse approach than letting the model do everything on its own. Wondering what people's opinions on this are. Thanks in advance. submitted by /u/Seankala [link] [comments]

  • [D] Understanding the Look Ahead Mask in Transformer Attention
    by /u/KonArtist01 (Machine Learning) on July 14, 2024 at 12:21 pm

    I am trying to wrap my head around the explanation for the look ahead mask in transformers, which is lower triangular matrix masking. The common explanation is that it prevents tokens to look into the future and I see why it makes sense superficially. If you consider the matrix as a directed adjacency graph in classical data structure representation, this statement is true. But this matrix is generated by a network, thus is not restricted by such made up rules. If you would draw up a undirected graph of the image below, you will see that every node is still connected to every node. That's at least where my confusion stems from. I would say that the same argument could be made that the tokens cannot look into the past. If we say that attention matrix is bidirectional we could even say that a triangular matrix is enough to represent a full connected graph, as the full matrix would be symmetrical and thus is holding redundant information. And further begs the question how an asymmetrical attention matrix should be intepreted. Or is that the subsequent value V multiplication uses the matrix in such a way, that it must indeed be interpreted as a directed graph? Source: https://towardsdatascience.com/illustrated-guide-to-transformers-step-by-step-explanation-f74876522bc0 submitted by /u/KonArtist01 [link] [comments]
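A small numpy sketch may help here: the look-ahead mask is applied to the score matrix before the softmax, so row i of the resulting attention matrix puts zero weight on every position j > i regardless of what the network computed. Sizes and values below are illustrative:

```python
# Hedged numpy sketch of the look-ahead (causal) mask: positions above the
# diagonal are set to -inf *before* the softmax, so each row i can only
# attend to positions j <= i. The mask is applied to the scores, not learned.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

T, d = 4, 8
rng = np.random.default_rng(0)
Q, K = rng.normal(size=(T, d)), rng.normal(size=(T, d))

scores = Q @ K.T / np.sqrt(d)                       # raw attention scores (T x T)
mask = np.triu(np.ones((T, T)), k=1).astype(bool)   # True above the diagonal
scores[mask] = -np.inf                              # "future" positions removed

attn = softmax(scores, axis=-1)
print(np.round(attn, 2))   # row i has zeros for all j > i: no attention to the future
```

Because the mask is asymmetric by construction, the resulting attention matrix is generally not symmetric, which is why interpreting it as a directed graph (row i attends to column j) is the consistent reading.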

  • [P] Machine Learning Teach by Doing
    by /u/OtherRaisin3426 (Machine Learning) on July 14, 2024 at 10:15 am

    I believe that anyone can transition to machine learning, if they decide to do so. For the last 3 months, I started a project to teach machine learning and deep learning. I recorded 70 videos in machine learning and deep learning. Every day, I scripted, recorded and edited 1 video for about 6-7 hours. The result is 2 massive playlists. 1️⃣ Machine Learning Teach by Doing playlist: (a) Topics covered: Regression, Classification, Neural Networks, Convolutional Neural Networks (b) Number of lectures: 35 (c) Lecture instructor: Me (IIT Madras BTech, MIT AI PhD) (d) Playlist link: https://www.youtube.com/playlist?list=PLPTV0NXA_ZSi-nLQ4XV2Mds8Z7bihK68L 2️⃣ Neural Networks from scratch playlist: (a) Topics covered: Neural Network architecture, forward pass, backward pass, optimizers. Completely coded in Python from scratch. No Pytorch. No Tensorflow. Only Numpy. (b) Number of lectures: 35 (c) Lecture instructor: Me (IIT Madras BTech, MIT AI PhD) Playlist link: https://www.youtube.com/playlist?list=PLPTV0NXA_ZSj6tNyn_UadmUeU3Q3oR-hu P.S: Lecturer background: I graduated with a PhD in machine learning from MIT. submitted by /u/OtherRaisin3426 [link] [comments]

  • [R] Using Video Generation Models for Taxi OD Demand Matrix Prediction
    by /u/Bobsthejob (Machine Learning) on July 14, 2024 at 7:44 am

    Hello I started self-studying into AI at the start of this year - I have blogged about it too (blog can be found in my github in the link down below), and I read many papers related to the above topic. Starting from Graph Neural Networks, then moved to what can I add and thought about Mixture Density Networks, and then ended up with Next-frame prediction models applied to Origin-Destination taxi demand matrix prediction. I have never written any kind of paper, just read plenty during my studies and I think mine is some very basic, not even undergrad-level 'text'. Any feedback is appreciated, I feel like I should not even show this to my professor (I am currently doing MSc in Information Systems). I feel like I started well - reading many many papers, taking notes, doing summaries, and got a very good understanding of the topic, but when it came to the model bit I kind of rushed everything because I gave up hope on writing any kind of paper. My 'paper' is not published anywhere, its PDF is on my Github along with the code. (edit: I used overleaf for the first time so even some of the formatting is weird, please excuse me for that) Abstract: Predicting taxi demand is essential for managing urban transportation effectively. This study explores the application of next-frame prediction models—ConvLSTM and PredRNN—to forecast Origin-Destination (OD) taxi demand matrices using a concatenated dataset of NYC taxi data from early 2024. ConvLSTM achieved an RMSE of 1.27 with longer training times, while PredRNN achieved 1.59 with faster training. These models offer alternatives to traditional graph-based methods, showing strengths and trade-offs in real-world scenarios. Additionally, an open-source framework for model deployment is introduced, aiming to bridge the gap between research and practical implementation in taxi demand forecasting. Our code can be found on our Github. Keywords: Taxi, Demand, forecasting, OD Matrix, Next-Frame Prediction Models submitted by /u/Bobsthejob [link] [comments]

  • [D] How do companies like Glean or OpenAI store so much data in a vector DB for retrieval?
    by /u/dtek_01 (Machine Learning) on July 14, 2024 at 6:58 am

    I was going to Qdrants pricing and saw that if I wanted 32GB RAM with 4 VPCUs + 1TB (hypothetical space) it would cost me around $780/month. How do companies like Glean or OpenAI make so much data for Enterprises searchable? These enterprises are already paying for storage, so they aren't thinking of RAG as paying for storage; confused about how large the amount of data scales when such services are so expensive. Pretty sure even hosting your own wouldn't be cheap. Any ideas? submitted by /u/dtek_01 [link] [comments]

  • [D] Centralised (& Sharable) Data Management as Phd Student
    by /u/LeanderKu (Machine Learning) on July 13, 2024 at 7:09 pm

    Hi, so I am currently working on a project where I have both models and datasets. I have access to compute servers on my university and persistent storage, but they are all private and only accessible via VPN (which makes sense). My model-weights also live on Weights&Biases, so they are sharable, accessible etc. I now want something similar for the (MNIST-size) datasets I generate and key analysis results. Something similar like git, a central point of truth that I can share with people, I would also really like to give my github CI-runner read-access. Does anyone here have nice solution going on? I also worry about costs tbh. I thought about S3 but the costs seem too high. There are research application portals for the big cloud players though, I wonder whether a small application would have a chance? submitted by /u/LeanderKu [link] [comments]

  • [R] Understanding the Unreasonable Effectiveness of Discrete Representations In Reinforcement Learning
    by /u/ejmejm1 (Machine Learning) on July 13, 2024 at 4:20 pm

    Links: Paper: https://arxiv.org/abs/2312.01203 Code: https://github.com/ejmejm/discrete-representations-for-continual-rl Video: https://youtu.be/s8RqGlU5HEs <-- Recommended if you want a quick (~13 min) look Thesis: https://era.library.ualberta.ca/items/d9bc72bd-cb8c-4ca9-a978-e97e8e16abf0

    Problem: Several recent papers in the model-based RL space [e.g. 1, 2, 3] have used discrete state representations - that is weird! Why use representations that are less expressive and far more limited in informational content? That's what this paper looks at: (1) what are the benefits of using discrete states to learn world models, and (2) what are the benefits of using discrete states to learn policies? We also just start to look at why this might be the case.

    Key Results. 1. World models learned over discrete representations were able to more accurately represent more of the world (transitions) with less capacity when compared to those learned over continuous representations. [Figures: ground truth, continuous representations, discrete representations.] Above you can see the same policy played out in the real environment, and simulated in continuous and discrete world models. Over time, errors in the continuous world model accumulated, and the agent never reaches the goal. This is less of a problem in the discrete world model. It's important to note that both have the potential to learn perfect world models when the model is large enough, but when that is not possible (as is generally the case in interesting and complex environments like the real world) discrete representations win out.

    2. Not all "discrete representations" are created equal. A discrete variable is one that can take on a number of distinct values. Prior work typically uses multi-one-hot representations that look like the green matrix here: https://preview.redd.it/wi0f2hud4bcd1.png?width=1048&format=png&auto=webp&s=65f7ac6fa8ae48978b0e9f8097c0d8852090b793 They are binary matrices that can be simplified to vectors of natural numbers (i.e. discrete vectors). Each natural number corresponds to a one-hot encoding given by one row of the matrix. Representing these discrete values with one-hot encodings, however, is a choice. What if we instead were to represent them as vectors of arbitrary continuous values? So long as we are consistent (e.g. 3 always maps to [0.2, -1.5, 0.4]), then we are representing the exact same information. We call this form of discrete representation a quantized representation (for reasons made clearer in the paper). If we compare models learned over quantized and multi-one-hot representations, we see a significant gap in the model's accuracy. [Figure: lower means a more accurate world model and is better.] Multi-one-hot representations are binary, quantized representations are not. Both represent the same discrete information. It turns out that the binarity and sparsity are actually really important! It is not necessarily just the fact that the representations are discrete.

    3. Policies learned over discrete representations improved faster. Because this post is already pretty long, I'm skipping a lot of details and experiments here (more in the paper). We pre-learned multi-one-hot and continuous representations of two MiniGrid environments, and then learned policies over them. During policy training, we changed the layout of the environment at regular intervals to see how quickly the policies could adapt to the change. The agent's goal in these environments is to quickly navigate to the goal, so lower episode length is better. When we do this, we see that the policy learned over discrete (multi-one-hot) representations consistently adapts faster.

    Conclusion: Discrete representations in our experiments were beneficial. Learning from discrete representations led to more accurately modeling more of the world when modeling capacity was limited, and it led to faster-adapting policies. However, it does not seem to be just the discreteness of "discrete representations" that makes them effective. The choice to use multi-one-hot discrete representations, and the binarity and sparsity of these representations, seem to play an important role. We leave the disentanglement of these factors to future work. submitted by /u/ejmejm1 [link] [comments]
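To make the multi-one-hot versus quantized distinction concrete, here is a small numpy sketch; sizes and values are illustrative, not taken from the paper:

```python
# Hedged numpy sketch: the same discrete codes represented as multi-one-hot
# vectors (binary, sparse) versus a "quantized" representation mapping each
# code to a fixed, arbitrary continuous vector. Sizes are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_latents, n_values = 4, 6          # 4 discrete variables, each taking one of 6 values
codes = rng.integers(0, n_values, size=n_latents)       # e.g. array([5, 3, 1, 4])

# Multi-one-hot: one row per latent variable, a single 1 per row
one_hot = np.zeros((n_latents, n_values))
one_hot[np.arange(n_latents), codes] = 1.0

# Quantized: the same codes, but each value indexes a fixed continuous embedding
codebook = rng.normal(size=(n_values, 3))               # arbitrary but consistent vectors
quantized = codebook[codes]

print(one_hot)      # binary and sparse
print(quantized)    # same discrete information, neither binary nor sparse
```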

  • [D] Hiring students/graduates, good or bad idea?
    by /u/maxiedaniels (Machine Learning) on July 13, 2024 at 3:11 pm

    My startup is at a point where we'd like to start exploring some novel concepts using ML, specifically within the realm of audio. We're self funded so we have limited budget and can't afford some the ML people I find on job postings asking for $400k/yr 😳 But interestingly enough, all the ML open source projects I see that are truly interesting seem to be done by graduate students / people working on their PhD. Not by people with huge resumes working for massive companies. Is it unreasonable to try and find a passionate graduate student at a somewhat affordable hourly rate, in hopes that they could become part of the company, equity, etc? Or is that not usually a thing? submitted by /u/maxiedaniels [link] [comments]

  • [D] How "normal" is my ML Engineer job?
    by /u/Fursol (Machine Learning) on July 13, 2024 at 9:16 am

    Hello everyone, I hope this post doesn't break any rule. I have a master's degree in CS (ML-focused) and I started working at a startup as a ML Engineer about 1 month ago. The company has a very interesting ML core product (on which they research/experiment quite a bit), but it also has a small "side project" for a single client which is far less exciting. It basically consists of a simple web app doing some ML operations in the background, mainly by querying OpenAI's APIs and by using other pre-built models that aren't modified in any way. Now, the person who developed this app is leaving and I'll have to pick up the project.. and I am a little pissed about it, because this isn't really what I expected when coming here. Considering that this is my first job ever (never even did an internship before), how "normal" would you consider working on such a product? Is my position actually just prompt-engineering in disguise? submitted by /u/Fursol [link] [comments]

  • [P] I struggled to understand how Stable Diffusion works, so I decided to write my own from scratch with a math explanation 🤖
    by /u/jurassimo (Machine Learning) on July 12, 2024 at 11:38 pm

    submitted by /u/jurassimo [link] [comments]

  • Using Agents for Amazon Bedrock to interactively generate infrastructure as code
    by Akhil Raj Yallamelli (AWS Machine Learning Blog) on July 11, 2024 at 4:25 pm

    In this blog post, we explore how Agents for Amazon Bedrock can be used to generate customized, organization standards-compliant IaC scripts directly from uploaded architecture diagrams. This will help accelerate deployments, reduce errors, and ensure adherence to security guidelines.

  • Improve RAG accuracy with fine-tuned embedding models on Amazon SageMaker
    by Ennio Pastore (AWS Machine Learning Blog) on July 11, 2024 at 4:09 pm

    This post demonstrates how to use Amazon SageMaker to fine tune a Sentence Transformer embedding model and deploy it with an Amazon SageMaker Endpoint. The code from this post and more examples are available in the GitHub repo.

  • How BRIA AI used distributed training in Amazon SageMaker to train latent diffusion foundation models for commercial use
    by Doron Bleiberg (AWS Machine Learning Blog) on July 11, 2024 at 3:52 pm

    This post is co-written with Bar Fingerman from BRIA AI. This post explains how BRIA AI trained BRIA AI 2.0, a high-resolution (1024×1024) text-to-image diffusion model, on a dataset comprising petabytes of licensed images quickly and economically. Amazon SageMaker training jobs and Amazon SageMaker distributed training libraries took on the undifferentiated heavy lifting associated with infrastructure

  • Create custom images for geospatial analysis with Amazon SageMaker Distribution in Amazon SageMaker Studio
    by Janosch Woschitz (AWS Machine Learning Blog) on July 11, 2024 at 3:42 pm

    This post shows you how to extend Amazon SageMaker Distribution with additional dependencies to create a custom container image tailored for geospatial analysis. Although the example in this post focuses on geospatial data science, the methodology presented can be applied to any kind of custom image based on SageMaker Distribution.

  • Automating model customization in Amazon Bedrock with AWS Step Functions workflow
    by Biswanath Mukherjee (AWS Machine Learning Blog) on July 11, 2024 at 3:27 pm

    Large language models have become indispensable in generating intelligent and nuanced responses across a wide variety of business use cases. However, enterprises often have unique data and use cases that require customizing large language models beyond their out-of-the-box capabilities. Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs)

  • Knowledge Bases for Amazon Bedrock now supports advanced parsing, chunking, and query reformulation giving greater control of accuracy in RAG based applications
    by Sandeep Singh (AWS Machine Learning Blog) on July 11, 2024 at 12:49 am

    Knowledge Bases for Amazon Bedrock is a fully managed service that helps you implement the entire Retrieval Augmented Generation (RAG) workflow from ingestion to retrieval and prompt augmentation without having to build custom integrations to data sources and manage data flows, pushing the boundaries for what you can do in your RAG workflows. However, it’s

  • Streamline generative AI development in Amazon Bedrock with Prompt Management and Prompt Flows (preview)
    by Antonio Rodriguez (AWS Machine Learning Blog) on July 10, 2024 at 4:45 pm

    Today, we’re excited to introduce two powerful new features for Amazon Bedrock: Prompt Management and Prompt Flows, in public preview. These features are designed to accelerate the development, testing, and deployment of generative artificial intelligence (AI) applications, enabling developers and business users to create more efficient and effective solutions that are easier to maintain. You

  • Empowering everyone with GenAI to rapidly build, customize, and deploy apps securely: Highlights from the AWS New York Summit
    by Swami Sivasubramanian (AWS Machine Learning Blog) on July 10, 2024 at 4:38 pm

    Imagine this—all employees relying on generative artificial intelligence (AI) to get their work done faster, every task becoming less mundane and more innovative, and every application providing a more useful, personal, and engaging experience. To realize this future, organizations need more than a single, powerful large language model (LLM) or chat assistant. They need a

  • A progress update on our commitment to safe, responsible generative AI
    by Vasi Philomin (AWS Machine Learning Blog) on July 10, 2024 at 4:37 pm

    Responsible AI is a longstanding commitment at Amazon. From the outset, we have prioritized responsible AI innovation by embedding safety, fairness, robustness, security, and privacy into our development processes and educating our employees. We strive to make our customers’ lives better while also establishing and implementing the necessary safeguards to help protect them. Our practical

  • Fine-tune Anthropic’s Claude 3 Haiku in Amazon Bedrock to boost model accuracy and quality
    by Yanyan Zhang (AWS Machine Learning Blog) on July 10, 2024 at 3:30 pm

    Frontier large language models (LLMs) like Anthropic Claude on Amazon Bedrock are trained on vast amounts of data, allowing Anthropic Claude to understand and generate human-like text. Fine-tuning Anthropic Claude 3 Haiku on proprietary datasets can provide optimal performance on specific domains or tasks. The fine-tuning as a deep level of customization represents a key

  • Achieve up to ~2x higher throughput while reducing costs by up to ~50% for generative AI inference on Amazon SageMaker with the new inference optimization toolkit – Part 2
    by James Wu (AWS Machine Learning Blog) on July 9, 2024 at 9:59 pm

    As generative artificial intelligence (AI) inference becomes increasingly critical for businesses, customers are seeking ways to scale their generative AI operations or integrate generative AI models into existing workflows. Model optimization has emerged as a crucial step, allowing organizations to balance cost-effectiveness and responsiveness, improving productivity. However, price-performance requirements vary widely across use cases. For

  • Achieve up to ~2x higher throughput while reducing costs by ~50% for generative AI inference on Amazon SageMaker with the new inference optimization toolkit – Part 1
    by Raghu Ramesha (AWS Machine Learning Blog) on July 9, 2024 at 9:59 pm

    Today, Amazon SageMaker announced a new inference optimization toolkit that helps you reduce the time it takes to optimize generative artificial intelligence (AI) models from months to hours, to achieve best-in-class performance for your use case. With this new capability, you can choose from a menu of optimization techniques, apply them to your generative AI

  • Anthropic Claude 3.5 Sonnet ranks number 1 for business and finance in S&P AI Benchmarks by Kensho
    by Qingwei Li (AWS Machine Learning Blog) on July 9, 2024 at 8:09 pm

    Anthropic Claude 3.5 Sonnet currently ranks at the top of S&P AI Benchmarks by Kensho, which assesses large language models (LLMs) for finance and business. Kensho is the AI Innovation Hub for S&P Global. Using Amazon Bedrock, Kensho was able to quickly run Anthropic Claude 3.5 Sonnet through a challenging suite of business and financial

  • The Weather Company enhances MLOps with Amazon SageMaker, AWS CloudFormation, and Amazon CloudWatch
    by Qaish Kanchwala (AWS Machine Learning Blog) on July 8, 2024 at 7:12 pm

    In this post, we share the story of how The Weather Company (TWCo) enhanced its MLOps platform using services such as Amazon SageMaker, AWS CloudFormation, and Amazon CloudWatch. TWCo data scientists and ML engineers took advantage of automation, detailed experiment tracking, integrated training, and deployment pipelines to help scale MLOps effectively. TWCo reduced infrastructure management time by 90% while also reducing model deployment time by 20%.

  • Eviden scales AWS DeepRacer Global League using AWS DeepRacer Event Manager
    by Sathya Paduchuri (AWS Machine Learning Blog) on July 8, 2024 at 6:59 pm

    Eviden is a next-gen technology leader in data-driven, trusted, and sustainable digital transformation. With a strong portfolio of patented technologies and worldwide leading positions in advanced computing, security, AI, cloud, and digital platforms, Eviden provides deep expertise for a multitude of industries in more than 47 countries. Eviden is an AWS Premier partner, bringing together

  • Generate unique images by fine-tuning Stable Diffusion XL with Amazon SageMaker
    by Alen Zograbyan (AWS Machine Learning Blog) on July 8, 2024 at 6:47 pm

    Stable Diffusion XL by Stability AI is a high-quality text-to-image deep learning model that allows you to generate professional-looking images in various styles. Managed versions of Stable Diffusion XL are already available to you on Amazon SageMaker JumpStart (see Use Stable Diffusion XL with Amazon SageMaker JumpStart in Amazon SageMaker Studio) and Amazon Bedrock (see

  • Build your multilingual personal calendar assistant with Amazon Bedrock and AWS Step Functions
    by Feng Lu (AWS Machine Learning Blog) on July 3, 2024 at 4:57 pm

    This post shows you how to apply AWS services such as Amazon Bedrock, AWS Step Functions, and Amazon Simple Email Service (Amazon SES) to build a fully-automated multilingual calendar artificial intelligence (AI) assistant. It understands the incoming messages, translates them to the preferred language, and automatically sets up calendar reminders.

  • Medical content creation in the age of generative AI
    by Sarah Boufelja (AWS Machine Learning Blog) on July 3, 2024 at 4:50 pm

    Generative AI and transformer-based large language models (LLMs) have been in the top headlines recently. These models demonstrate impressive performance in question answering, text summarization, code, and text generation. Today, LLMs are being used in real settings by companies, including the heavily-regulated healthcare and life sciences industry (HCLS). The use cases can range from medical

Download AWS Machine Learning Specialty Exam Prep App on iOS


Download AWS Machine Learning Specialty Exam Prep App on Android/Web/Amazon


Pass the 2024 AWS Cloud Practitioner CCP CLF-C01 Certification with flying colors

Ace the 2024 AWS Solutions Architect Associate SAA-C03 Exam with Confidence