Speech To Note


Q: Own API keys and Anthropic Claude

1.) Can we add our own API key for GPT?
This would mean better privacy for us (and we would be billed on our own API key for AI usage, but I would still much prefer to have that option).

2.) Could you add Anthropic Claude and let us choose the AI model for our custom prompts if we provide our own API key for it?
While your outputs are good and we can tune them with custom prompts, I did a lot of comparing between GPT-4, GPT-4 Turbo and the new Claude 3 models (Haiku is fast but still better than all GPT models for summaries, Sonnet is much better, and Opus is so much better you will never want to go back to GPT-4 for summaries after using it). It's a night and day difference: Claude 3 Opus correctly captures all the fine details and the most important points, even for very tricky notes with many topics.
And in some situations, GPT totally fails to accurately output what I was talking about. It's good for structured speech like lectures I give. But if I talk to my voice recorder with an idea, I sometimes talk about smaller details or other topics before getting back to my main idea, and GPT totally misses the point then or even gets the main topic wrong. Claude 3 Opus never failed me; it accurately gets it right, even if the first 2 minutes matter, the next 8 minutes are unimportant, and all the main topics follow in the last 3 minutes. Even for a 50-minute note. Claude is so good that I copy the transcript to their website, let it summarize it there, and paste it back to Speech To Note (thanks for letting us edit the output!).
So, could you add Claude for custom prompts if we bring our own API key? It should be fairly easy, since calling their API works almost the same as OpenAI's: you just have to let us add our API key, provide a dropdown for the model (GPT-4 or Claude 3 Opus, and later the other Claude models like Haiku vs Sonnet vs Opus) and call their endpoint. It is so good, your other customers will love it.
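Just to show how close it is, here is a rough Python sketch of a Claude call (the requests library, the model ID and the variable names are only examples, not your actual code):

```python
import requests

MY_ANTHROPIC_KEY = "sk-ant-..."   # the key we would paste into settings (placeholder)
custom_prompt = "Summarize the following note as bullet points."
transcript = "..."                # the transcribed speech

# Anthropic's Messages endpoint works much like OpenAI's chat completions:
# a different URL, a different auth header, and an explicit max_tokens.
resp = requests.post(
    "https://api.anthropic.com/v1/messages",
    headers={
        "x-api-key": MY_ANTHROPIC_KEY,
        "anthropic-version": "2023-06-01",
        "content-type": "application/json",
    },
    json={
        "model": "claude-3-opus-20240229",
        "max_tokens": 4096,
        "system": custom_prompt,
        "messages": [{"role": "user", "content": transcript}],
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["content"][0]["text"])   # the summary
```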
Thanks!

user3500 (PLUS)
Posted: Apr 7, 2024
Founder Team
abhishek_ux
May 14, 2024

A: Hello,
We definitely have plans to include an option where you will be able to use your own API keys, whether from OpenAI (GPT) or from Anthropic. I don't have a timeline for this because it isn't a priority as of now; other things are still pending on our end.
Once the backlog has been taken care of, we will start thinking about how to integrate it in a way that keeps us sustainable and is also beneficial for users. So yes, we are looking into the possible scenarios of how to go about it, but it's definitely in our plans, and definitely not only long-term plans.
We'll let you know about it once we start working on it. But thank you so much for your overall feedback here.

Verified purchaser

Posted: Apr 9, 2024

Hello and thanks a lot for your reply!
That's good to hear. And it should be very easy and fast to implement for you:

a.) GPT-4 Turbo / OpenAI: Let us enter our own API key in a variable in the settings. Whenever you call the AI, add a single line that tests whether we entered an API key. If so, use it for the request.
Done.
And maybe one added line of error handling: if the response is "invalid API key", let us know we made a mistake while pasting (or test this with a short query when we save the API key, but still add the error handling in case we accidentally delete the API key later on the OpenAI side).
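A minimal sketch of what I mean, in Python (the variable and function names are just placeholders, not your real code):

```python
import requests

OPENAI_URL = "https://api.openai.com/v1/chat/completions"
APP_OPENAI_KEY = "sk-..."  # the app's own key (placeholder)

def summarize(transcript: str, prompt: str, user_openai_key: str | None = None) -> str:
    # The single added line: prefer the user's own key if one was saved in settings.
    api_key = user_openai_key or APP_OPENAI_KEY

    resp = requests.post(
        OPENAI_URL,
        headers={"Authorization": f"Bearer {api_key}"},
        json={
            "model": "gpt-4-turbo",
            "messages": [
                {"role": "system", "content": prompt},
                {"role": "user", "content": transcript},
            ],
        },
        timeout=120,
    )

    # The added error handling: a 401 means the stored key is wrong or was revoked.
    if resp.status_code == 401 and user_openai_key:
        raise ValueError("Your OpenAI API key was rejected; please re-check it in settings.")
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]
```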

b.) To later add Anthropic:
Let us enter an API key in a second variable in the settings.
Give us a drop-down for each custom format for "model": GPT-4 Turbo vs Claude 3 Haiku, Claude 3 Sonnet, or Claude 3 Opus. To unlock Sonnet and Opus, test whether we entered an Anthropic API key, since Opus is more expensive (but still totally worth it because of the fantastic results).
Then when you call the AI, the syntax etc. is almost identical: if the model is GPT-4 Turbo, call OpenAI's endpoint; if it's any Claude model, switch the API endpoint/URL. Done.
All Claude models have longer context windows (200k) and can also output 4096 tokens, so any prompt/query that currently works will also work with Claude.
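Here is how that routing could look, again as a rough Python sketch (the settings keys and placeholder values are made up for the example, not your real code):

```python
import requests

APP_OPENAI_KEY = "sk-..."        # the app's own keys (placeholders)
APP_ANTHROPIC_KEY = "sk-ant-..."

# Sonnet and Opus only unlock when the user has saved their own Anthropic key.
PREMIUM_CLAUDE = {"claude-3-sonnet-20240229", "claude-3-opus-20240229"}

def run_custom_format(model: str, prompt: str, transcript: str, settings: dict) -> str:
    if model.startswith("claude-"):
        if model in PREMIUM_CLAUDE and not settings.get("anthropic_api_key"):
            raise ValueError("Add your Anthropic API key in settings to use Sonnet or Opus.")
        url = "https://api.anthropic.com/v1/messages"
        headers = {
            "x-api-key": settings.get("anthropic_api_key") or APP_ANTHROPIC_KEY,
            "anthropic-version": "2023-06-01",
        }
        payload = {
            "model": model,
            "max_tokens": 4096,
            "system": prompt,
            "messages": [{"role": "user", "content": transcript}],
        }
    else:
        url = "https://api.openai.com/v1/chat/completions"
        headers = {"Authorization": f"Bearer {settings.get('openai_api_key') or APP_OPENAI_KEY}"}
        payload = {
            "model": model,
            "messages": [
                {"role": "system", "content": prompt},
                {"role": "user", "content": transcript},
            ],
        }

    resp = requests.post(url, headers=headers, json=payload, timeout=120)
    resp.raise_for_status()
    data = resp.json()
    # Only the response shape differs slightly between the two APIs.
    if model.startswith("claude-"):
        return data["content"][0]["text"]
    return data["choices"][0]["message"]["content"]
```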

c.) And you might want to give users who don't bring an API key access to Claude 3 Haiku: happier users and lower costs for you.
It's faster than GPT-4 Turbo, while a token costs less than GPT-3.5 Turbo. For input tokens, the price is 2.5% of GPT-4 Turbo's (yes: 0.25 USD per 1M tokens vs 10 USD per 1M tokens, so GPT-4 Turbo is 40 times more expensive). For output tokens, it's 1.25 USD per 1M tokens vs 30 USD per 1M tokens (GPT-4 Turbo is more than 20x more expensive per output token).
But Haiku is not only way cheaper and faster, it is MUCH better for summaries. I did a lot of testing in March and was blown away. I need AI to create very detailed technical summaries of large, complex presentations where every detail matters, and Haiku is already very good. I often use Haiku because it only takes 7-9 seconds to return a summary for a 20-minute text (where Opus needs 40-50 seconds).
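To put those prices into per-note numbers, here is a quick back-of-the-envelope calculation (the token counts are rough assumptions for a ~20-minute note, not measured values):

```python
# Rough per-note cost using the per-1M-token prices quoted above.
input_tokens, output_tokens = 4_000, 500   # assumed size of a ~20-minute note + summary

prices = {  # (USD per 1M input tokens, USD per 1M output tokens)
    "claude-3-haiku": (0.25, 1.25),
    "gpt-4-turbo": (10.00, 30.00),
}

for model, (p_in, p_out) in prices.items():
    cost = input_tokens / 1e6 * p_in + output_tokens / 1e6 * p_out
    print(f"{model}: ${cost:.4f} per note")

# claude-3-haiku: $0.0016 per note
# gpt-4-turbo: $0.0550 per note
```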

The best Claude 3 model, Opus, is even better, but needs 5-6 times longer. And beware, Opus is more expensive than GPT-4 Turbo (for me, it's still worth every penny; so give this to people with their own API keys only, but release Haiku for everyone. That will keep your costs down and customers happy because of faster and more accurate summaries). For a well-structured presentation, the quality of Haiku is already awesome.

Jack.S (PLUS)

Verified purchaser

Posted: Jun 18, 2024

Is there any news on this matter?