Q: Not to be a bummer or negative.

But I used your Store Clothing reviews demo https://hazlo.ai/e/4lB9ChGA to test your system's prediction accuracy on real-world clothing store reviews. I went to https://www.google.com/search?tbs=lf:1,lf_ui:10&tbm=lcl&sxsrf=ALeKk01D4PJmA2014Pn6w9f1z8ASkLAC_A:1626888848068&q=womens+clothing+reviews&rflfq=1&num=10&sa=X&ved=2ahUKEwj_y4Kf2fTxAhXKB80KHQ-aCOYQjGp6BAgYEFM&biw=1868&bih=1184#lrd=0x89d5033fffda07e1:0x38ff752e699963f0,1,,,&rlfi=hd:;si:4107130227385787376,a;mv:[[43.944950399999996,-78.840828],[43.8814779,-78.9440689]]

and copied and pasted reviews from there into https://hazlo.ai/e/4lB9ChGA. Sorry to say the accuracy of the predictions was way off! Try it yourself on this obvious 1-star review:

Don't bother taking your clothes here unless they are upscale brand name items still in style. Just donate them. You'll feel good and help someone. The few dollars you might get aren't worth it.

Thought I would just post this so that you might be able to 'defend' why the results are so bad?

geveden, Jul 21, 2021
Founder Team
Ojas_Hazlo
May 15, 2024

A: Hey,

Thanks a lot for reaching out & I completely understand the concern :) Hazlo builds your models using only state-of-the-art libraries, from sklearn to keras. These are the standard algorithmic building blocks used by almost every organisation that relies on machine or deep learning. Setting aside some incremental optimisations, you can be assured that the models you build on Hazlo are second to none.

So when it comes down to results, a major contributor is the quality & quantity of the data fed into the system. The project I built here used something I just had at hand at the moment: a small sample dataset of pure language, under 25k rows, that trained in under 20 minutes as a proof-of-concept (with the added benefit that it's an easy use-case to grasp). At the same time, it was a language model with only one text column, and that kind of model requires an incredible amount of time, infrastructure & data to be of real value.
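To give a concrete sense of what a proof-of-concept like this looks like, here's a minimal sketch using sklearn, one of the libraries mentioned above. The file name, column names & model choice are illustrative assumptions on my part, not Hazlo's actual pipeline:

```python
# Minimal review-rating baseline on a small text dataset.
# Assumed inputs: reviews.csv with a "review" text column and a
# "stars" label column -- purely hypothetical, not Hazlo's pipeline.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

df = pd.read_csv("reviews.csv")  # ~25k rows of raw review text

X_train, X_test, y_train, y_test = train_test_split(
    df["review"], df["stars"], test_size=0.2, random_state=42
)

# TF-IDF turns each review into a sparse word-frequency vector;
# logistic regression then learns per-word weights. This trains in
# minutes, but it sees words in isolation: no grammar, sarcasm or
# broader context.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=2),
    LogisticRegression(max_iter=1000),
)
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")

# Out-of-domain text (e.g. a review about donating clothes rather
# than shopping for them) can easily fool a model trained on so
# little data.
print(model.predict(["Don't bother taking your clothes here unless "
                     "they are upscale brand name items still in style."]))
```

A bag-of-words baseline like this is quick to train & evaluate, which is exactly why its grasp of linguistic nuance is shallow compared to models trained at far larger scale.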

As you can imagine, getting an algorithm to completely understand the nuances of the English language requires significantly larger resources. For instance, GPT-3, one of the largest neural networks trained to date, has 175 billion parameters and was trained on hundreds of billions of tokens of text, the product of years of research at OpenAI, to get it to a level where it comprehends the language to some acceptable standard (https://en.wikipedia.org/wiki/GPT-3).

Hazlo is meant to help individuals & SMEs quickly build A-grade models on relatively small, mostly numeric or categorical, datasets. Hope I was able to address the concern & please let me know if I missed anything :)

Best,
Ojas from Hazlo
