Q: GPT-4 Turbo has a 128k context window, i.e. 10x better than the old GPT-4. Can we use it with this software soon, or do I need to initiate a refund?
Pawel_NeuronWriter
May 14, 2024
A: Hello,
We are switching from GPT-4 to GPT-4 Turbo soon.
We hope it will be better, but we don't really know yet. OpenAI isn't much help either, communicating only that the production (stable) version will be available in a few weeks.
How do you know it's 10x better? Please share the source; I'd be happy to read it.
I haven't found any objective comparisons showing better-quality output from GPT-4 Turbo. My own tests are quite disappointing, especially when it comes to training data. Asked about Lisa Marie Presley, who died on 12 January 2023, GPT-4 Turbo answered: "As of my last update in March 2023, Lisa Marie Presley was alive."
You can refund NEURON at any time within the 60-day window. No questions asked.
Hope this helps ;)
Awesome, thanks for the prompt reply. I said 10x since we already know the context window is about 10x larger. So in theory it processes (and outputs) 10x more at a time, and could therefore generate 10x longer coherent/accurate content, since I think the previous API had an 8k or 16k window (only Azure had 32k for company accounts, right?).
Hello
I think many people are misled by the communication about the huge 128k context, expecting the generated content to be way longer.
Please check the documentation: output tokens are limited to 4,096, compared to 2,048 with the previous GPT-4 model. So it's just 2x more, not 10x ;)
https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo
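To make the distinction concrete, here's a small Python sketch comparing context window against maximum output per request. The figures are the ones quoted in this thread (not fetched from any official API), and the dictionary and helper names are mine, purely for illustration:

```python
# Context window vs. max output tokens for the models discussed above.
# Figures are those quoted in this thread; treat them as illustrative.
MODELS = {
    "gpt-4":              {"context_window": 8_192,   "max_output": 2_048},
    "gpt-4-32k":          {"context_window": 32_768,  "max_output": 2_048},  # Azure company accounts
    "gpt-4-1106-preview": {"context_window": 128_000, "max_output": 4_096},  # GPT-4 Turbo
}

def improvement(new: str, old: str, field: str) -> float:
    """Ratio of a given token limit between two models."""
    return MODELS[new][field] / MODELS[old][field]

# The 128k headline number is about the input side...
print(improvement("gpt-4-1106-preview", "gpt-4", "context_window"))  # 15.625
# ...while the maximum output per request only doubles:
print(improvement("gpt-4-1106-preview", "gpt-4", "max_output"))      # 2.0
```

So a bigger window mainly lets the model *read* more (e.g. more competitor articles as input), not *write* proportionally more in a single response.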
Good clarification, thanks. Still, at 10x the tokens it is processing 10x more input and outputting accordingly, so 2x the output length may mean 5x better "knowledge", since it can reference 5x more input facts per output token? Just thinking out loud here. I hope it is more accurate, so the auto-draft based on other articles can use more competitors' content for inspiration without plagiarising. I appreciate your willingness to incorporate it and see how it helps. I know it won't be 10x better; I'm just hoping the big context window makes a big difference. Thanks for creating a great tool, I've been impressed so far!
I actually have my old tool generating content, and yesterday I cross-tested their "gpt-4-1106-preview" against gpt-4.
The AI-detection scores are higher on the new version (and that's important for me), but most importantly the content is not as good at all; it's pretty similar to what I was getting with GPT-3.5.
Maybe the "stable" version is going to be better :)