Needs Improvement in Tracking and Result Quality
I purchased this tool expecting to accurately track how my company appears in responses to specific user queries on AI platforms. Unfortunately, the results did not meet those expectations. After reaching out to support, I was informed that the tool does not track actual user queries at all. Instead, it runs daily AI-based prompt evaluations, and visibility is measured only through the responses generated during these structured runs. This limitation was not made clear upfront, and for my use case it is simply not sufficient.
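To illustrate the distinction as I understand it (the code below is my own rough sketch, with hypothetical prompt lists, names, and structure, not the vendor's actual implementation), the tool appears to do something like the following once a day, rather than observing what real users actually ask:

    # Rough sketch of the "daily prompt evaluation" approach as support described it;
    # the prompts, brand name, and structure here are my own assumptions.
    from openai import OpenAI

    client = OpenAI()

    TRACKED_PROMPTS = [
        "What are the best tools for competitive intelligence?",   # fixed, pre-defined prompts
        "Which vendors should I consider for brand monitoring?",
    ]
    BRAND = "MyCompany"

    def daily_visibility_run():
        """Run each tracked prompt once and record whether the brand is mentioned."""
        results = []
        for prompt in TRACKED_PROMPTS:
            response = client.chat.completions.create(
                model="gpt-4.1-nano",  # the model the tool reportedly switched to
                messages=[{"role": "user", "content": prompt}],
            )
            answer = response.choices[0].message.content
            results.append({"prompt": prompt, "mentioned": BRAND.lower() in answer.lower()})
        return results

    # This measures visibility only inside these scripted runs; it never sees
    # what real users actually type into ChatGPT or other AI platforms.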
Additionally, I noticed a significant drop in result quality over the past three days, coinciding with a change in the underlying model from GPT-4o mini (around $0.15 per million input tokens) to GPT-4.1 nano (around $0.10 per million input tokens). Since the switch, the outputs have been noticeably less accurate and often unrelated to the prompts, which has made the tool feel unreliable. I understand the need to manage operational costs, but this change undermines the tool’s core value proposition.
At this point, I have mixed feelings about the product. I do see potential, and I am open to updating my review if improvements are made, particularly around transparency, model performance, and tracking of actual user queries.