Did GitHub Copilot Just Get Schooled by a Novel AI Research?
PLUS: Now you too can create mesmerizing infinite zoom art
Howdy fellas!
AI breakthroughs + productivity hacks = pure awesomeness!
In this edition, Spark & Trouble dive deep into some really cool AI research that can be a lifesaver for data scientists & ML engineers, along with some freakishly awesome AI tools to automate your workflows.
Plus, we have a special tutorial for all our super-creative readers…
Here’s a sneak peek into this edition 👀
Experience the future of automated machine learning with MLCopilot
Craft the perfect itinerary for your next vacation with this power prompt
A short tutorial on creating captivating Infinite Zoom Art with StableDiffusion
Time to jump in!😄
PS: Got thoughts on our content? Share 'em through a quick survey at the end of every edition. It helps us see how our product labs, insights & resources are landing, so we can make them even better.
Hot off the Wires 🔥
We're eavesdropping on the smartest minds in research. 🤫 Don't miss out on what they're cooking up! In this section, we dissect some of the juiciest tech research that holds the key to what's next in tech.⚡
“Paypal leveraged ‘X’ to improve fraud detection accuracy to a whopping 95%, all while slashing model training time to under 2 hours!”
“Lenovo saw a significant boost in their sales and manufacturing operations, increasing accurate predictions from 80% to a stellar 87.5% after the integration of ‘X’”
Figured out what this X-factor is that has led top companies to achieve such stellar results?
Think harder…
Answer: “AutoML”
It’s a process to automatically find the best machine learning model for a given task, without requiring humans to manually try out different algorithms and tweak parameters. It's like having a smart assistant that can explore various model options and configurations, and then recommend the optimal one for your specific problem.
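To make that concrete, here's a minimal, self-contained sketch of the core AutoML loop - exhaustively trying candidate algorithms and hyperparameter configurations, and keeping the best scorer. The algorithm names, search space, and scoring function below are all toy placeholders (a real system would actually train and validate models):

```python
import itertools

def score_config(algorithm, config, data):
    """Hypothetical evaluation: train `algorithm` with `config` on `data`
    and return a validation score (higher is better).
    Dummy scoring logic so the sketch runs end-to-end."""
    base = {"svm": 0.80, "random_forest": 0.85, "gradient_boosting": 0.88}[algorithm]
    return base + 0.01 * config["depth"] - 0.005 * abs(config["lr"] - 0.1) * 100

# Toy search space; real AutoML tools explore far richer spaces.
search_space = {
    "algorithm": ["svm", "random_forest", "gradient_boosting"],
    "depth": [2, 4, 8],
    "lr": [0.01, 0.1, 0.3],
}

def simple_automl(data=None):
    """Try every (algorithm, hyperparameter) combination and return
    the best-scoring one - the essence of AutoML search."""
    best_score, best = float("-inf"), None
    for algo, depth, lr in itertools.product(
        search_space["algorithm"], search_space["depth"], search_space["lr"]
    ):
        s = score_config(algo, {"depth": depth, "lr": lr}, data)
        if s > best_score:
            best_score, best = s, (algo, {"depth": depth, "lr": lr})
    return best, best_score
```

In practice, libraries like the ones below replace this brute-force loop with smarter search strategies (Bayesian optimization, early stopping of weak trials, and so on).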
Let's face it, building ML models from scratch can be a real hassle - from the notorious challenges of hyperparameter tuning to the laborious process of feature engineering and model optimization. Over the last few years, AutoML techniques have made significant strides to ease this process. Today, you can use them through libraries like PyCaret, AutoML solutions from cloud giants like Azure, AWS & GCP, as well as dedicated platforms like DataRobot & H2O.ai.
However, these solutions do have their limitations:
Time-consuming trials: For large datasets, training multiple trial models can be time-consuming.
Lack of experience transfer: AutoML often starts from scratch for each new task, neglecting valuable experience from past projects.
Domain specificity: Transferring AutoML capabilities across different domains remains a formidable challenge.
But fear not, for the researchers at Microsoft have a brilliant solution up their sleeves - the “MLCopilot” framework!
So, what’s new?
From the name, MLCopilot clearly indicates the use of an LLM that can “augment” users to help them accomplish ML-related tasks. But how is that any different from what these fascinating LLM-based coding tools (honestly…it’s almost like “magic”) like GitHub Copilot, Amazon CodeWhisperer or Google’s Duet do?
You see, these LLM-based tools have proven their mettle in various tasks - from function snippet completion & algorithm implementation to basic data exploration & coding up ML algorithms. However, their performance in solving machine learning problems based solely on task descriptions has been, well, underwhelming.
When asked to build a classifier for a dataset containing images, GitHub Copilot assumed the input was a CSV (instead of a folder containing images) & proposed an SVM model (instead of a CNN or another deep learning architecture).
Oops, looks like GitHub Copilot needs a little help there!
But what if we could leverage the best of both worlds – machine intelligence and human design patterns? That's precisely the inspiration behind developing MLCopilot as a framework that harnesses the power of LLMs, along with the wealth of past experiences and knowledge to solve new ML tasks more efficiently.
Under the hood…
At the heart of MLCopilot lies an ingenious framework that seamlessly integrates LLMs with a vast repository of historical machine-learning experiences. Here's a glimpse into how it works:
Note: The LLM used here is the GPT-3.5 model (codename: text-davinci-003), without any additional fine-tuning
Offline Stage
The training data from various benchmarks are organized into “solution spaces” - think of these as categories of problems, like classification or image recognition. For the tasks in each solution space, the following techniques are applied:
Canonicalization: Raw historical records of ML tasks (including descriptions of the problem, solution models with hyperparameter specifications, and evaluation metrics) are meticulously transformed into a standardized format, i.e. a well-formed natural language description. These canonicalized tasks are termed “experiences” & the combined set of all these experiences is called the “experience pool”
Knowledge Elicitation: Random subsets of these experiences are iteratively sampled, and an LLM is employed to summarize and analyze them, eliciting high-level knowledge acquired from these experiences – this conversion of raw experiences into organized knowledge is a true novelty, inspired by how humans understand stuff.
Validation: To ensure the utmost integrity of the generated knowledge, it undergoes rigorous validation using LLMs, safeguarding against potential hallucinations.
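The canonicalization step above can be sketched in a few lines. Note that the field names and sentence template here are our own illustrative assumptions, not the paper's exact format:

```python
def canonicalize(record):
    """Turn a raw historical ML record into a standardized natural-language
    'experience', in the spirit of MLCopilot's canonicalization step."""
    # Sort hyperparameters so equivalent records canonicalize identically.
    hp = "; ".join(f"{k} = {v}" for k, v in sorted(record["hyperparameters"].items()))
    return (
        f"Task: {record['task']}. "
        f"Solution: {record['model']} with {hp}. "
        f"Result: {record['metric']} = {record['score']:.3f}."
    )

# A hypothetical raw record of a past ML run.
raw = {
    "task": "image classification on CIFAR-10",
    "model": "ResNet-18",
    "hyperparameters": {"learning_rate": 0.1, "batch_size": 128},
    "metric": "accuracy",
    "score": 0.912,
}

experience = canonicalize(raw)
# A pool of such canonicalized experiences is what the offline stage
# samples from and summarizes into higher-level knowledge.
```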
Offline stage in MLCopilot (source: MLCopilot paper)
The researchers have gone the extra mile by releasing the knowledge obtained from their experience, essentially creating a "cookbook" for future ML developers.
Online Stage
When presented with a new ML task during inference, MLCopilot does the following:
Retrieval: It first retrieves relevant Experiences and Knowledge.
Experiences are retrieved by comparing the similarity between the embeddings of tasks from the experience pool and the embedding of the task description.
Knowledge is retrieved based on its relevance to the solution space of the new task.
Solution Proposal: The LLM within MLCopilot then uses the retrieved Experiences and Knowledge to propose a solution tailored to the specific task at hand.
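Here's a rough, self-contained sketch of the experience-retrieval step: rank the experience pool by cosine similarity to the new task's embedding and keep the top matches. The tiny hand-made vectors below stand in for real embeddings produced by an embedding model:

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def retrieve_experiences(task_embedding, experience_pool, k=2):
    """Return the k experiences whose task embeddings are most similar
    to the new task's embedding."""
    ranked = sorted(
        experience_pool,
        key=lambda exp: cosine(task_embedding, exp["embedding"]),
        reverse=True,
    )
    return ranked[:k]

# Illustrative 3-dimensional "embeddings"; real ones have hundreds of dims.
pool = [
    {"text": "image classification with CNN", "embedding": [0.9, 0.1, 0.0]},
    {"text": "tabular regression with boosting", "embedding": [0.0, 0.2, 0.9]},
    {"text": "object detection with ResNet", "embedding": [0.8, 0.3, 0.1]},
]

new_task = [0.85, 0.2, 0.05]  # embedding of the new task description
top = retrieve_experiences(new_task, pool)
```

The retrieved experiences (here, the two vision-related ones) are then stuffed into the LLM's prompt alongside the relevant knowledge, grounding its solution proposal.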
Overall MLCopilot framework (source: MLCopilot paper)
The Results
MLCopilot has been rigorously tested on a diverse array of benchmarks, encompassing numerous ML tasks and datasets, ranging from tabular data classification and regression to image classification and object detection.
The results? A resounding success, with MLCopilot outperforming major AutoML baselines, including auto-sklearn and FLAML on normalized accuracy. It even surpassed popular code completion tools like GitHub Copilot and Amazon CodeWhisperer.
From Research to Real-World Implementation
Now, as with any groundbreaking research, there are a few nuances to consider when translating MLCopilot into real-world applications (considering the potential of its immediate integration into GitHub Copilot).
The benchmarks used in testing had predefined solution spaces. This allowed for simpler retrieval of knowledge. However, in tools like GitHub Copilot, where the solution space might be unknown, the knowledge-retrieval aspect of MLCopilot would require modification - something similar to the experience-retrieval process.
Also, before taking MLCopilot to production, it would be imperative to scale the experience & knowledge pools to the vast categories of tasks not covered by these benchmarks.
MLCopilot isn't intended to replace existing AutoML approaches entirely. Instead, the researchers envision a future where MLCopilot and established AutoML methods join forces, opening up a fascinating avenue for exploration.
For our enthusiastic geeks who are desperate to try out the capabilities of MLCopilot hands-on, check out this GitHub repository.
You can start using it as your assistant right away by providing it with your OpenAI API key.
Spark and Trouble are beyond excited about this groundbreaking innovation, and can't wait to see how MLCopilot evolves and gets integrated into actual products (GitHub Copilot first! 🤞), empowering developers, data scientists, and researchers with unprecedented capabilities in tackling machine learning challenges.
Stay tuned for more hot-off-the-wires innovations that are shaping the future of AI and technology!
10x Your Workflow with AI 📈
Work smarter, not harder! In this section, you’ll find prompt templates 📜 & bleeding-edge AI tools ⚙️ to free up your time.
Fresh Prompt Alert!🚨
Ever dreamt of conquering a new country 🌎, but planning that epic trip feels like navigating a jungle with a sprained ankle? 😩
Spark & Trouble know the struggle of crafting the perfect itinerary, so they’re sharing today’s fresh prompt template to ditch the generic tours and whip up a killer itinerary for your next adventure.
Let's get planning 👇
You are my travel agent. Please plan a travel itinerary for [#] days and [#] nights in [COUNTRY] where we would fly into [CITY] and depart from [CITY]. I will be traveling with [TRAVEL COMPANION] during [DATES/TIME OF YEAR].
We definitely want to visit [CITY], [CITY], and [CITY].
Please include means of travel, estimated cost of travel, and estimated time to travel between locations.
Please also include 3 attractions we should check out in each city, the costs associated with each, and the hours of operation.
We enjoy things like [ACTIVITY], [ACTIVITY], and [TYPE OF EXPERIENCE]. All-in, we would like to spend less than [$$$].
Ask me five questions that would help you do a better job of helping me create an itinerary.
3 AI Tools You JUST Can't Miss 🤩
Spark 'n' Trouble Shenanigans 😜
Last week, our buddies Spark & Trouble decided to go creative…
Infinite zoom art (created by authors)
Do you wish to create such mesmerizing infinite zoom art as well?😮
That too for FREE, without any video editing skills, in just a few minutes!🤯
Infinite zoom art is a visual technique that creates the illusion of an infinite zoom-in or zoom-out on an image.
Until now, such intriguing effects could only be created by experienced cinematographers & video editors. But today, people like you & me, with just our creativity (& some cutting-edge AI tools), can generate such powerful visuals in a matter of minutes, without spending a single rupee! 🔥
Here's how you too can create your first amazing zoom-in/zoom-out video:
✅ Install "AUTOMATIC1111 Stable Diffusion WebUI"
✅ Install the "Infinite Zoom Extension" in AUTOMATIC1111
✅ Download the "DreamShaper Inpainting" Model Checkpoint (this model is great for modifying images in the genre of concept art, digital art, anime, etc.)
✅ Set the total video length
✅ Fill in the ‘common’ portion of the prompt, i.e., the style keywords, which stay the same throughout the video
✅ Now, start filling in the various subjects and their timings that you wish to appear in your infinite zoom (𝘵𝘩𝘪𝘴 𝘪𝘴 𝘸𝘩𝘦𝘳𝘦 𝘺𝘰𝘶 𝘨𝘦𝘵 𝘤𝘳𝘦𝘢𝘵𝘪𝘷𝘦!)
✅ Hit “Generate Video”!
... and within minutes, your crazy infinite zoom video will be in front of you! How awesome is that? 🤩
Well, that’s a wrap! Until then,