Never Miss a Detail: How Otter AI Can Save You From Meeting Misery

Heard the buzz about this new Open Source LLM?

Howdy fellas!

Trouble’s untangling the potential of the latest open LLM, while Spark’s been lighting up over a new AI meeting assistant—looks like we’ve got a ‘charged’ edition ahead!

Here’s a sneak peek into this week’s edition 👀

  • Decoding Otter AI’s OtterPilot for Meeting Automation

  • Databricks released this new SoTA open LLM - DBRX

  • 3 AI Tools You JUST Can't Miss!

  • Steal this prompt that can help you craft professional, empathetic, and clear emails

Time to jump in!😄

PS: Got thoughts on our content? Share 'em through a quick survey at the end of every edition. It helps us see how our product labs, insights & resources are landing, so we can make them even better.

Product Labs🔬: Decoding “Otter AI’s OtterPilot”

Ever zoned out in a meeting, frantically scribbling indecipherable chicken scratches while a goldmine of ideas floats by? Or maybe you're the designated notetaker, emerging from the conference room with carpal tunnel and a desperate need for a nap?

Those days are over, my friends. Today, we dive into the otter-ly amazing world of Otter AI, the AI-powered meeting assistant.

What’s in it for you?

Otter AI is not new to the game - it was actually founded in 2016 (as AISense). With the advent of generative AI, the company rebranded as Otter AI.

In February 2023, Otter AI launched its AI meeting assistant called OtterPilot. It can generate transcriptions, capture any slides shared, and also generate a summary.

The fun doesn't stop there. Otter integrates with your calendar, automatically joining virtual meetings and saving you the scramble. It works with Teams, Google Meet, & Zoom. Sharing notes and action items becomes a breeze, keeping everyone on the same page and moving forward. Oh, and did we mention that Otter integrates with Salesforce and Slack as well, not just video conferencing applications?

OtterPilot is available as a browser extension (a small letdown: it’s only available on Chrome). Install the extension & enable OtterPilot in your meetings. The caveat here is that you have to join the meeting via the browser, and lo and behold! You have an additional attendee, OtterPilot. The summary & transcripts generated can be accessed in Otter AI’s web dashboard.

Hard(ly) at work 😋 to roll out this edition, aided by OtterPilot

Now, what makes Otter different is that while most AI-based meeting assistants only let you ask questions about the current meeting, Otter can draw on all your previous meetings, and you can ask anything. Otter AI Chat generates answers using all the conversations you have access to - this includes your personal conversations ("My Conversations"), conversations shared with you ("Shared with Me"), and those shared to your Workspace.

To understand how an answer was derived, you can click on the "View Sources" button. This shows you the exact conversations Otter AI Chat used to formulate its response, allowing you to delve deeper into those conversations for more context.

(Using a borrowed image here) Otter AI Chat takes stock of all the meetings in the given timeline to generate an answer

Otter AI reminds us of Santa’s bag as it is overloaded with an amazing number of features. 🎅🏼

Similar to Slack channels, you can create AI channels on Otter. Each channel is dedicated to a single meeting, creating a focused space to discuss it. You may not be able to host a meeting via Otter, but you can paste a meeting link to record in the channel, and also record an audio conversation. You can then tag Otter in the chat to summarize, generate action items, or ask any questions about that meeting.

The AI channel for Vision Debugged - the summary and outline of the meeting, with Otter AI Chat on the right for any questions

If you thought Otter was only for transcribing meetings, you are humbly mistaken. You can also upload video files and generate transcripts for them - and not just transcripts: all of Otter's features are available, the same as for any recorded meeting.

Well, you can also extend the same to YouTube videos: record the audio, then use Otter to summarize it.

Otter also has some specialized features for specific scenarios:

  • Sales: OtterPilot for Sales automatically extracts Sales Insights, writes follow-up emails, and pushes call notes to Salesforce and HubSpot.

  • Education: In virtual sessions, Otter records audio and automatically takes notes in real-time. It also automatically captures lecture slides and adds them to the notes.

Now the bigger question - Why Otter?

There are a few other AI-based meeting assistants out there - Google Duet, Gong, M365 Copilot, and Zoom AI Companion.

There are a few areas where Otter shines. Duet, M365 Copilot & Zoom AI are all tightly coupled with their parent applications, i.e., Google Meet, Teams & Zoom, while Otter can be used in conjunction with any application. What's more, Otter also integrates with other third-party applications such as Salesforce & Slack.

Another prominent feature of Otter is that it can generate context from all the past meetings and is not limited to the current context.

M365 Copilot has almost all the features of Otter and is in some ways better, as it is integrated with the full M365 suite. However, there is no free version.

One drawback of Otter though is that it works only in English, while Duet AI is multilingual.

While Spark & Trouble are a two-person team and may not fully understand the pains of multiple long meetings with large teams, they still hope Otter AI can help make your lives a lot easier.

Hot off the Wires 🔥

We're eavesdropping on the smartest minds in research. 🤫 Don't miss out on what they're cooking up! In this section, we dissect some of the juiciest tech research that holds the key to what's next in tech.⚡

“We’ve surpassed everything!” That’s what Jonathan Frankle (chief neural network architect at Databricks) exclaimed - and why wouldn’t he? After investing ~$10 million & months of tireless efforts, his amazing team recently launched DBRX (pronounced as “DB-Rex”), a general-purpose large language model (LLM), and the ripples of its impact as the state-of-the-art open model can be felt throughout the industry.

You can find the weights of the DBRX Base & the fine-tuned DBRX Instruct models on Hugging Face. Think of the base model as a talented but general student. An instruction fine-tuned model is like that same student after lots of practice following specific instructions (like “answering MCQs”), making it better at certain tasks.

Where does DBRX Shine?

Developed by Mosaic AI Research (a leading Gen AI platform that released the MPT LLMs, and was acquired by Databricks last spring), DBRX surpassed big names like Mixtral, LLaMa 2, Grok-1 & GPT-3.5 on standard benchmarks for language understanding, programming & math, while offering stiff competition to closed-source giants like Claude 3, Gemini 1.5 Pro & GPT-4.

Comparison of DBRX against other Open Models & GPT-3.5 across various benchmarks
(Source: created by authors based on data from Databricks blog)

Another cool aspect is its ability to process & generate long chunks of text: it was trained with a large context size of 32,000 tokens (compared to ~16,000 tokens in GPT-3.5). This propels DBRX’s capabilities in Retrieval Augmented Generation (RAG) tasks, which involve finding relevant content from a database before an LLM answers your question.

On the RAG benchmarks, DBRX Instruct outperformed GPT-3.5 & other open models, while being competitive with results from GPT-4 (source: Databricks blog)
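To make the RAG idea concrete, here’s a minimal sketch (toy keyword-overlap scoring and made-up documents - real pipelines use embedding-based retrieval, and this is not Databricks’ implementation):

```python
def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_rag_prompt(query: str, docs: list[str]) -> str:
    """Prepend the most relevant documents so the LLM answers with context."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
```

A bigger context window like DBRX’s 32,000 tokens simply means more retrieved passages fit into the `Context:` section before the model runs out of room.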

Under the Hood

At a high level, here are some of the interesting technical tidbits to know about DBRX:

  • It is a transformer-based decoder-only LLM, i.e., it takes a sequence of input tokens (words or parts of words) & predicts the next token, one at a time, without a separate component to process (encode) the input.

  • DBRX uses a fine-grained Mixture-of-Experts or MoE architecture with 132B parameters (think of this as an ensemble of smaller specialist models where only the relevant specialists handle any given input, ensuring efficient & expert replies). Compared to Mixtral & Grok-1, which choose 2 out of the 8 available experts, DBRX relies on 16 smaller experts & selects the top 4 for a given input - that’s a whopping 65 times more possible combinations! (hint: apply combinatorics - 16C4 / 8C2)

  • It uses several innovative features to enhance its capabilities:

    • Rotary Position Encodings (RoPE) help the model see how the pieces fit together better, by giving it a clearer sense of where each word is in the sentence

    • Gated Linear Units (GLUs) act like filters, letting the model focus on the most important information for each piece of the puzzle

    • Grouped Query Attention (GQA) speeds things up by letting the model work on smaller sections of the puzzle at a time, without losing the big picture

  • DBRX has been trained on a mind-boggling 12 trillion tokens (6 times more than LLaMa), and the quality of that data is supposedly 2x better than what Mosaic AI used for their impressive MPT models.
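The “65 times more combinations” hint above is easy to verify with a couple of lines of combinatorics:

```python
from math import comb

# Mixtral & Grok-1: choose 2 active experts out of 8
mixtral_combos = comb(8, 2)    # 28 possible expert pairs

# DBRX: choose 4 active experts out of 16
dbrx_combos = comb(16, 4)      # 1820 possible expert quadruples

print(dbrx_combos // mixtral_combos)  # 65
```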

Why does this matter?

With surveys revealing that 60% of AI leaders are interested in switching to open-source LLMs once they match the performance of closed models, DBRX becomes a strong contender for mass adoption. It’s become the talk of the town & folks have found it to handle super-niche topics really well without hallucinating - thanks to the ‘experts’ combinations from its MoE architecture.

Moreover, DBRX is extremely fast & cost-effective, enabling customers to build & deploy production-grade Gen AI apps rapidly.

While Spark & Trouble keep a close tab on more fascinating details about DBRX, you can play around with it on AWS, Google Cloud & Azure Databricks, or try it on early-adopter platforms like Perplexity Labs & Poe.

DBRX-Instruct Model available on Perplexity Labs as a custom model

Check out this guide to get started with DBRX.

10x Your Workflow with AI 📈

Work smarter, not harder! In this section, you’ll find prompt templates 📜 & bleeding-edge AI tools ⚙️ to free up your time.

3 AI Tools You JUST Can't Miss 🤩

  • SciSpace - Do hours worth of literature review in minutes with this AI research assistant

  • Collato - Capture your product ideas and meeting transcripts & generate documents in seconds

  • Jupitrr - Create marketing videos in a flash with stock footage, subtitles, and much more

Fresh Prompt Alert!🚨

Let's face it, crafting professional emails can be a drag.

Spark, the PM whiz, can whip out compelling emails like a magician pulling rabbits from a hat. But for data geeks like Trouble, the struggle is real.

And, we all know the perils of generic AI-generated emails, right? They're either longer than a Tolstoy novel or scream "ROBOT WROTE THIS" from the subject line.

But fear not! Here’s a prompt that'll have you crafting professional, empathetic, and clear emails like a boss…

Write an email that is [communication style 1], [communication style 2] & [communication style 3]

Be clear and concise. Do not hedge or qualify. Avoid jargon. Do not use emojis or exclamation points.

I am a [job title or relevant role for your email] at [company or on project]. I describe myself as [description about you].

I am writing this email to [recipient]. Some background information about them is [recipient’s LinkedIn header]. My goal for this email is to [goal].

Here is a previous email from them. Please customize and match their style of writing.
[insert their previous email, if replying to them]

* Replace the content in brackets with your details. Communication styles may be professional, charismatic, memorable, powerful, friendly, clear, etc.
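If you’d rather not hand-edit the brackets every time, a tiny helper can assemble the prompt for you (a sketch - the field names and sample values below are our own illustrations, not part of any official template):

```python
EMAIL_PROMPT = (
    "Write an email that is {style1}, {style2} & {style3}\n\n"
    "Be clear and concise. Do not hedge or qualify. Avoid jargon. "
    "Do not use emojis or exclamation points.\n\n"
    "I am a {role} at {company}. I describe myself as {about_me}.\n\n"
    "I am writing this email to {recipient}. Some background information "
    "about them is {recipient_bio}. My goal for this email is to {goal}."
)

prompt = EMAIL_PROMPT.format(
    style1="professional", style2="empathetic", style3="clear",
    role="data analyst", company="Acme Corp",
    about_me="a data geek who dreads writing emails",
    recipient="a prospective client",
    recipient_bio="VP of Analytics | Data-driven decision maker",
    goal="schedule a 30-minute discovery call",
)
print(prompt)  # paste this into your favorite LLM chat
```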

Spark 'n' Trouble Shenanigans 😜

The Weekly Chuckle

ML engineers these days

Well, that’s a wrap!
Thanks for reading 😊

See you next week with more mind-blowing tech insights 💻

Until then,
Stay Curious🧠 Stay Awesome🤩

PS: Do catch us on LinkedIn - Sandra & Tezan
