Intelswift

gaspare1972

Verified purchaser

Deals bought: 32 | Member since: Oct 2022
3 stars
Dec 23, 2025

I want to love this, but...

There are many positive aspects to Intelswift; however, it is not ready for production use. I identified what I consider to be a significant security flaw, although the vendor does not share that assessment. At a minimum, users should be explicitly warned.

By design, the platform allows a support email address (for example, [email protected]) to be connected to an AI bot that automatically responds using the knowledge base uploaded by the customer. Functionally, this works as intended: when a legitimate user emails the address, the bot replies with environment-specific information.

The issue is that anyone, including an external actor, can email that address and receive a response. There is no authentication, validation, or access control on the inbound request. As a result, the bot will provide information to anyone who asks—based solely on the content of the knowledge base.
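
To make the failure mode concrete, here is a minimal sketch of what an unauthenticated email-to-bot pipeline looks like. This is illustrative only; the names and the toy retrieval step are hypothetical, not Intelswift's actual code.

    # Hypothetical sketch of the flow described above. All names are
    # illustrative; the "retrieval" is a toy keyword lookup.

    KNOWLEDGE_BASE = {
        "wifi": "SSID: CorpNet, password: hunter2",          # nothing stops staff
        "returns": "Items may be returned within 30 days.",  # from uploading this
    }

    def answer_from_kb(question: str) -> str:
        """Toy stand-in for the bot's knowledge-base retrieval step."""
        for keyword, entry in KNOWLEDGE_BASE.items():
            if keyword in question.lower():
                return entry
        return "Sorry, I don't have that information."

    def handle_inbound_email(sender: str, body: str) -> str:
        """Reply to ANY sender; 'sender' is never checked against anything."""
        return answer_from_kb(body)

    # An outside actor gets exactly the same answer as a legitimate user:
    print(handle_inbound_email("attacker@example.org", "What is the wifi password?"))

The point is structural: nothing between the inbound message and the outbound reply ever asks whether the sender is entitled to an answer.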

This creates a material security risk, particularly in environments with robust or detailed knowledge bases. To give a simplistic example, an attacker could email the bot asking for information such as Wi-Fi credentials, internal IP addresses, or printer/network details. Even where the intent behind the knowledge base is benign, the platform effectively encourages the centralization of operational knowledge in a way that can be trivially queried by unauthorized parties.

The vendor’s position, “do not upload sensitive information,” is insufficient. In real-world operations, it is unrealistic to assume that staff will never upload sensitive or semi-sensitive data, especially over time. More importantly, customers may never know that such information has been exposed, because the access mechanism leaves no obvious trace.

For this reason, I am requesting a refund for my own account. The product requires significantly more robust security controls, such as authentication, sender validation, access scoping, or, at a minimum, configurable safeguards to prevent unauthorized disclosure. SOC 2 compliance alone does not mitigate a design flaw that introduces systemic risk.
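
To illustrate what sender validation could look like in practice, here is a rough sketch of a configurable domain allowlist. The names are hypothetical, and a real implementation would also have to verify SPF/DKIM results, since a From header on its own is trivially spoofed.

    # Hypothetical sketch of one configurable safeguard: only answer
    # senders from domains the customer has approved.

    ALLOWED_DOMAINS = {"yourcompany.com"}  # configured per customer

    def answer_from_kb(question: str) -> str:
        """Stand-in for the retrieval step from the earlier sketch."""
        return "(knowledge-base answer)"

    def is_authorized_sender(sender: str) -> bool:
        """Accept only addresses whose domain is on the allowlist."""
        domain = sender.rsplit("@", 1)[-1].lower()
        return domain in ALLOWED_DOMAINS

    def handle_inbound_email(sender: str, body: str) -> str:
        if not is_authorized_sender(sender):
            return "This address only responds to verified senders."
        return answer_from_kb(body)

    print(handle_inbound_email("attacker@example.org", "wifi password?"))
    # -> "This address only responds to verified senders."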

At an absolute minimum, the product should include a clear, prominent warning to prospective customers about this risk before purchase or deployment.
-----
Update / Clarification

Your response reinforces the concern I raised and underscores the need for explicit, unambiguous disclosure to customers. It must be made very clear to purchasers that the design philosophy of the AI Agents is to be trained on, and to operate exclusively with, public or non-sensitive information (e.g., FAQs, help-center articles, policies, and general product information).

This is not optional guidance; it is a core architectural assumption. From a security standpoint, two material facts remain:

1. There is no authentication, validation, or access control on inbound requests.
Any external party able to reach the agent, whether by email or by chat, can elicit a response based solely on the knowledge it has been trained on.

2. Please stop with the compliance response; it has nothing to do with my concern. SOC 2 compliance DOES NOT mitigate a design-level exposure: SOC 2 addresses controls around processes, availability, and governance; it does not remediate or offset an architectural pattern that introduces systemic risk by design.

Absent explicit safeguards, this model creates a scenario where:

1. Sensitive internal data could be unintentionally uploaded by well-meaning staff.
2. That data could then be disclosed to unauthenticated external actors.
3. The risk is silent, difficult to detect, and discovered only after exposure has occurred.

The position that “customers should not upload sensitive data” is insufficient on its own. Without technical enforcement or prominent disclosure, this places an unreasonable burden on operational discipline and creates an avoidable security gap.
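
Technical enforcement does not have to be elaborate, either. Even a best-effort, pattern-based check at upload time would turn a silent risk into a visible warning. A hypothetical sketch, not a complete solution:

    # Hypothetical upload-time filter: flag credential-like content before
    # it enters the knowledge base. Best-effort, not a guarantee.

    import re

    SENSITIVE_PATTERNS = [
        re.compile(r"\bpassword\s*[:=]", re.IGNORECASE),
        re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),      # IPv4 addresses
        re.compile(r"\bssid\b", re.IGNORECASE),          # Wi-Fi network names
        re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),
    ]

    def flag_sensitive(document: str) -> list[str]:
        """Return the patterns a document matches, so an admin can review it."""
        return [p.pattern for p in SENSITIVE_PATTERNS if p.search(document)]

    hits = flag_sensitive("Office wifi password: hunter2")
    if hits:
        print("Upload held for review; matched:", hits)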

At a minimum, customers must be clearly warned, during onboarding and in documentation, that:

1. AI Agents are not access-controlled systems.
2. They must never be trained with credentials, network details, internal procedures, or any confidential information.
3. The product is intentionally designed for public information use cases only.

This is not a criticism of the platform’s intent; it is a matter of accurate risk communication and responsible security design.

Founder Team
Oksana_Intelswift

Dec 23, 2025

Thank you for sharing your feedback — we truly appreciate the time you took to explain your concerns.

I’d like to clarify an important point regarding security and compliance. Our platform is built with security as a core principle and is fully GDPR-compliant, as well as SOC 2 and ISO 27001 compliant. These standards reflect strong data protection, access control, and operational security practices across our infrastructure and processes.

That said, the design philosophy of our AI Agents is to be trained on, and to operate with, public or non-sensitive information (such as FAQs, help center articles, policies, and general product information). This approach allows us to deliver fast, reliable automation while minimizing the risk of sensitive data exposure.

In your specific case, the requirements appear to involve handling highly sensitive or restricted information, which is outside the intended scope of how our AI Agents are designed to be used today. Because of that, our platform may not be the best fit for your particular needs at this time.

However, this is not a reflection of insufficient security on our side — in fact, our compliance certifications demonstrate the opposite. It is simply a matter of aligning the right tool with the right use case.

We truly appreciate your openness and are always happy to discuss alternative approaches or future possibilities if your requirements evolve.
