Sam is a startup founder with an AI/ML background, a 6-year-old son with whom he speaks in Mandarin, and some interesting stories from his first startup as well.

Meet the Writer: Two-Time Founder Sam Bhattacharyya on Accidentally Finding Product-Market Fit


Let’s start! Tell us a bit about yourself. For example, name, profession, and personal interests.

My name is Sam. I grew up in the US and have a background in old-school AI/ML from Columbia and MIT. After grad school, I started my first startup (an e-learning app in West Africa), which we pivoted 10 different times [1][2][3] until we got acquired by a customer during the pandemic.

I worked as the head of AI for the acquirer (a company called Hopin) until 2024, when it was acquired by a private equity firm. I then started my 2nd startup, Katana (AI Video Editing), and it’s going ~~super well~~, ~~terribly~~, and unexpectedly.

Profession-wise, I’m not sure I neatly fit into any clean category. I definitely have an engineering/CS/technical background, I’m definitely a founder, and on paper I’m CEO, but really I’ve spent my time, both in my startups and when working for Hopin, doing a bit of everything (writing code, talking to customers, training AI models, marketing, HR/Management).

I don’t necessarily do all of those things well. I find it easier to prototype an idea than to build a stable production app. I find content marketing and talking to customers easier than B2B sales. If there’s one thing I’m particularly good at, it’s highly experimental tech ideas (like, I invented/patented a new video compression algorithm), and more recently, I’ve found a niche in developing really efficient AI models, especially ones that run in the browser and ones that deal with video/computer vision.

Like, I’m not a good AI researcher, but I can look at a real-world user problem, decide that none of the established research/open-source stuff is adequate, build custom architectures/training strategies for that specific problem, and deploy the results into a working product with customers. That approach has been the core of the last 3 tech products I built (AI Filters SDK, free video upscaling tool, AI video editing app).

On the personal front, I’m Indian-American, and I currently live in Mexico with my wife (who is Mexican) in a town called Querétaro. I have a 6-year-old son who really likes Minecraft. If I have one hobby, it’d be learning languages - I speak Spanish and Mandarin fluently, and actually speak to my son in Mandarin.

Interesting! What was your latest Hackernoon Top Story about?

My latest HackerNoon top story, How a Demo Page for my Abandoned Open Source SDK Accidentally Found Product Market Fit, was about how my 2nd startup is going. My first product (an AI video editing app) is doing okay, but recently it’s been overshadowed by a random side project (or more specifically, the demo page for an open source AI upscaling SDK) that unexpectedly found product-market fit on its own and now has ~100,000 Monthly Active Users, which is 200x the traffic of my “actual” startup product.

Do you usually write on similar topics? If not, what do you usually write about?

I have written quite a bit in the past, mostly about my last startup. I primarily used to write on Medium (you can see my past articles here), and I just wrote them organically: some of it was about specific projects we were working on, and some of it was just my own recounting of the startup journey. Given that we launched out of MIT, moved between Accra (Ghana), Lagos (Nigeria), Bangalore (India), Querétaro (Mexico), Boston, New York, and San Francisco, pivoted 10 times, nearly went bankrupt 3 times, got a video compression patent, unexpectedly found product-market fit, and were acquired by Europe’s fastest-growing startup, all within 5 years, I feel like we genuinely had some interesting stories to tell.

It was only after this article, about how our AI Filters SDK was 10x more performant than Google’s own open source models, that we got inbound organic customer interest.

I’ve also often written update emails to my first startup’s investors, and now to my current startup’s investors. Although those are emails, the style, tone, and length are similar to the article I just wrote.

I do want to write more stuff going forward, mostly about the stuff I’m working on. My current adventure with a random open source upscaling tool has motivated me to reconsider a few more interesting open source projects I’ve wanted to work on (an open source version of my first startup, and - if you can believe me - I think I’ve developed an algorithm that can improve the speed & accuracy of transcription models by an order of magnitude).

Those are things I might write academic papers for, but also developer-friendly explanations of how they work and why they matter.

I’ve also worked on some random side projects in the past that paired well with an article I wrote - for instance, I feel like this article I wrote a few years ago could have been of interest to HackerNoon readers. For me, the primary goal isn’t the writing in and of itself; it’s more: work on genuinely interesting things, and writing is a vehicle to tell the world about them.

Great! What is your usual writing routine like (if you have one)?

I don’t really have a writing ‘routine’ as a defined ‘process’ that I stick to, though maybe I should if I do more of it.

For almost every article I’ve written, there’s been some kind of story to tell, and I usually just write from start to finish in a ‘train of thought’ manner.

This tends to mirror my work style in general. I like to work intensely, deeply focused on one task until it’s done. That works just as well for writing articles as it does for writing code. And just like with code, yeah, I don’t do JIRA - I just wing it.

After years of writing investor emails, pitch decks, and blog posts, I’ve instinctively internalized how to structure stories and narratives. I don’t really plan or storyboard it; if there’s any planning, I write down some bullet points (the headers) and just start writing. I add images and graphics that I think make sense as I go.

I hardly do any editing afterwards, though more recently I’ve added spell checking as a formal step because I find myself writing so fast that I just make typos left, right, and center.

That’s also how you’ll know none of my stuff is AI-generated 😅, I have too many typos for anyone to think my articles were generated by an LLM.

Being a writer in tech can be a challenge. It’s not often our main role, but an addition to another one. What is the biggest challenge you have when it comes to writing?

I don’t really have any complaints about writing. I think the hardest part is just budgeting time to do it, because it isn’t my main role; it’s in service to whatever I’m working on. But honestly, for me, it’s part of the job description anyway.

Like, sure, debugging code (especially AI-generated code) takes time, but it has to be done, and it just takes time.

What is the next thing you hope to achieve in your career?

I want a project, other than this upscaling tool that randomly found product-market fit, to work out. Like, again, going back to the transcription thing I was talking about: if I really could build something that could advance the state of the art of an industry by an order of magnitude, if the world’s largest companies used my algorithm/software, that’s a career-defining kind of thing.

Wow, that’s admirable. Now, something more casual: What is your guilty pleasure of choice?

I miss my childhood. Adulting is hard, and for a while, I had dreamed of curling up in my bed and playing Pokémon like I did when I was 8.

At some point last year, I was sick during the holidays while visiting my parents’ house. I pulled out the same old Game Boy I used when I was eight and tried playing Pokémon Red, and… I couldn’t. Too much time has passed, my brain has changed, and I just don’t have the patience to play a video game where it’s like “Go on a quest to do xyz”; my real life is already full of to-do lists. I don’t really have the patience to chase down 150 Pokémon when I’m already frustrated with chasing down the 10 items my wife asked me to pick up from the grocery store.

My only solace was listening to old video game music, which was all the nostalgia without any of the effort. Whoever made a lo-fi version of the original Pokémon soundtrack, I want to truly and deeply thank you.

Do you have a non-tech-related hobby? If yes, what is it?

Like I mentioned, learning languages. I’m fluent in Mandarin and Spanish, and I speak to my son in Mandarin. I have tried to pick up others (I have some level of proficiency in Portuguese, French, and Italian, but I always mix them up). I have also tried learning Japanese and Arabic, but it’s just: where’s the time?

At some point I realized that if I don’t constantly maintain my Mandarin, it will decay, and I’d rather be fluent in one language than barely conversational in 5. To that end, what little free time I spend watching movies/videos is all in Mandarin.

Honestly, I watch tons of kids’ shows (Peppa Pig, Blippi) in Chinese. I used to watch those shows with my son, but now I watch them by myself:

(1) Because he’s too old for them now, and

(2) because those shows have by far the most practical vocabulary. Like, I can read the New York Times in Chinese, and I used to, but when am I going to talk to my son about the Ukraine-Russia conflict or the Trump presidency?

I do need to tell him to wash his hands, though, and kids’ shows like Peppa Pig and Blippi are the only places where you’ll regularly encounter normal, real-world household vocabulary like “Water faucet” and “Bicycle pedal.”

I have a 6-year-old son, so I don’t really have time for my own hobbies; any free time is for his hobbies. He really likes playing Minecraft, though, so if playing video games counts as legitimate 21st-century Father/Son bonding, that’s a win for me.

What can the Hacker Noon community expect to read from you next?

I have some ideas for articles. For one, I’ve been meaning to eventually write an open, insider’s perspective on the rise and fall of Hopin (formerly Europe’s fastest-growing startup).

I hope one of the other projects also works out, and if it does, you’ll certainly hear from me. I may also port (and update) some of my older interesting side projects here.

What’s your opinion on HackerNoon as a platform for writers?

I am surprised by how democratic the platform is. Most social media platforms have algorithms that can make it feel ‘random’ as to which posts and content do well and which don’t, especially when you don’t already have a big network/audience.

For someone without a big pre-established audience, I find it genuinely refreshing and hopeful that there is a platform with incredible reach and readership that surfaces good content, even if it comes from writers who don’t have a big established network or audience themselves.

It motivates me to actually write more genuinely interesting stuff - previously, the barrier was always “I can write this interesting thing but it’ll just go into the void”, and having a platform where good content can actually bubble up to a real audience is something I really appreciate as someone just getting started.

Thanks for taking the time to join our “Meet the writer” series. It was a pleasure. Do you have any closing words?

See you on the next post!
