A global port operator managing operations across 80+ terminals struggled with escalating support demands for their Zodiac logistics system. INTECH implemented an AI-driven chatbot that streamlined query resolution and reduced manual effort, providing 24×7 technical support. This solution drastically improved response times and operational efficiency, enabling the client to scale support as their business grew.
The client is a leading global logistics company operating over 80 ports and terminals worldwide. Their operations rely on a proprietary platform known as Zodiac. Zodiac manages everything from container tracking and customs clearance to real-time coordination across terminals. As their operations expanded, the need for efficient, scalable support became essential for maintaining smooth, uninterrupted service across their vast network.
Whenever a support query came up, a support agent logged it, passed it to the development team, and waited for updates. The process was slow and inefficient, and it resulted in poor customer support.
Here’s what was going wrong under the surface:
A large share of tickets were related to routine system actions, like status updates, document access issues, or permission issues. Since these queries weren't auto-resolved, support agents spent valuable time manually investigating each one. This repetitive loop consumed hours every day. As ticket volumes grew, agents spent more time reacting and less time solving new or urgent problems.
Since there was no standardized way to respond to common queries, support quality varied. One agent might give detailed steps, another might share a screenshot, and a third might skip critical context. This inconsistency didn’t just confuse users, it created operational risk.
Each unresolved issue meant idle equipment, delayed movement of cargo, or a waiting truck at the gate. Time-sensitive processes like manifest approvals, truck slot management, or customs uploads couldn’t proceed without timely technical support.
There was no central repository of past tickets, resolutions, or knowledge artifacts. The few internal wikis that existed were outdated or incomplete. Even experienced support agents had to rely on their memory, personal notes, or back-and-forth messages with the development team.
With more terminals coming online and more users relying on Zodiac, the business couldn’t afford to keep support in this shape. They needed a system that could scale support without scaling headcount and ensure consistency, speed, and quality every time a user reached out.
That’s when INTECH stepped in to build a smarter foundation for technical support.
When the client came to INTECH, they wanted more than just automation. Their support process had become chaotic, frustrating both users and helpdesk teams. Most of the team's time went to repetitive queries, not to actual issues.
INTECH stepped in with a solution built for scale. Here are the key features:
The heart of the chatbot system was a dual-language-model setup combining GPT-3.5 and Llama2. This architecture allowed INTECH to balance two critical needs: linguistic fluency and operational accuracy. The pairing ensured every answer was not just grammatically fluent but operationally useful.
When a user types a question, they don't always phrase it the way the support documentation does. That's why INTECH added a FAISS-powered semantic search feature.
This intelligent query processing feature understands the intent behind the query rather than looking for keywords. It compares the user's input to a deep archive of support documentation, ticket history, and past resolutions, allowing the system to pull the most relevant data to answer the question, even when the phrasing differs.
INTECH used vector embedding technology to represent every piece of training content, from resolved tickets to how-to guides, as dense semantic vectors. These vectors acted like memory snapshots that could be rapidly searched using similarity scores.
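The mechanics of embedding-based similarity search can be illustrated with a small sketch. Note the hedges: the case study's system used FAISS and a trained embedding model, while this illustration uses plain NumPy and a toy bag-of-words "embedding" so the idea stays self-contained; the vocabulary and documents are invented for the example.

```python
import numpy as np

# Toy embedding: map text to a unit-norm count vector over a tiny vocabulary.
# A production system would use a trained embedding model and a FAISS index.
VOCAB = ["container", "status", "customs", "upload", "permission", "reset", "truck", "slot"]

def embed(text: str) -> np.ndarray:
    tokens = text.lower().split()
    vec = np.array([tokens.count(w) for w in VOCAB], dtype=float)
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

# Knowledge base: resolved tickets stored as dense vectors ("memory snapshots").
DOCS = [
    "how to reset a user permission in Zodiac",
    "customs upload fails with a timeout",
    "container status not updating on the terminal dashboard",
]
DOC_VECS = np.stack([embed(d) for d in DOCS])

def search(query: str, k: int = 1) -> list[str]:
    """Return the k most similar documents by cosine similarity."""
    scores = DOC_VECS @ embed(query)  # dot product of unit vectors = cosine similarity
    top = np.argsort(scores)[::-1][:k]
    return [DOCS[i] for i in top]
```

A query like "why is my container status stuck" shares no exact keywords with the stored title beyond "container status", yet the similarity score still surfaces the right ticket; that is the behaviour the semantic search layer provides at scale.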
INTECH designed the chatbot to learn and improve with every interaction. Users could rate responses, flag issues, or provide direct feedback, while support agents regularly fed new tickets and resolutions into the system.
With periodic retraining, the chatbot evolved to handle future queries more effectively, giving the client a dynamic support system.
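The feedback-to-retraining loop described above can be sketched as a small store that keeps every rating but only promotes agent-approved, well-rated answers into the next retraining batch. This is a hypothetical illustration: the field names (`rating`, `approved`) and the rating threshold are assumptions, not the client's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class FeedbackStore:
    """Collects user feedback; agent-approved answers become retraining data.

    Illustrative sketch only -- field names and thresholds are assumptions.
    """
    records: list = field(default_factory=list)

    def log(self, query: str, answer: str, rating: int, approved: bool = False):
        # Every interaction is logged, whether or not it is reused later.
        self.records.append(
            {"query": query, "answer": answer, "rating": rating, "approved": approved}
        )

    def retraining_batch(self, min_rating: int = 4) -> list:
        # Only well-rated answers that an agent has approved feed the next retrain.
        return [
            r for r in self.records
            if r["rating"] >= min_rating and r["approved"]
        ]
```

Gating the batch on agent approval is the design choice that keeps the loop safe: a highly rated but wrong answer never contaminates the knowledge base.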
Now that the foundation was in place, it was time to bring the chatbot system to life.
INTECH approached the implementation with one goal in mind: to ensure the AI system delivered real impact without disrupting existing operations. This meant working closely with the client’s IT, support, and compliance teams to design an implementation process that was both technically sound and operationally smooth.
The full implementation process unfolded in three structured phases:
INTECH started by collecting historical support data like tickets and chat logs. The team cleaned and segmented this data, removing duplicates and irrelevant content. This structured data was added to the vector database, enabling fast and accurate responses.
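The cleaning step described above, removing duplicates and irrelevant content before indexing, can be sketched as follows. The normalization rule and the minimum-length cutoff are illustrative assumptions, not the project's actual pipeline.

```python
import hashlib
import re

def normalize(text: str) -> str:
    """Lowercase and collapse whitespace so near-identical tickets hash the same."""
    return re.sub(r"\s+", " ", text.strip().lower())

def clean_tickets(raw_tickets: list[str], min_len: int = 20) -> list[str]:
    """Drop duplicates and too-short (likely irrelevant) entries before indexing."""
    seen, cleaned = set(), []
    for ticket in raw_tickets:
        norm = normalize(ticket)
        if len(norm) < min_len:
            continue  # skip noise like "ok" or "thanks"
        digest = hashlib.sha1(norm.encode()).hexdigest()
        if digest in seen:
            continue  # duplicate after normalization
        seen.add(digest)
        cleaned.append(ticket)
    return cleaned
```

Only the tickets that survive this pass would be embedded and added to the vector database.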
INTECH integrated the chatbot into the client's existing system, deploying it as a microservice that worked directly with the database and the support ticketing system.
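At the core of such a microservice sits a routing decision: answer from the knowledge base when confidence is high, otherwise open a ticket in the existing ticketing system. A minimal sketch of that handler, with an invented `kb_lookup` callable and confidence threshold standing in for the real vector-database search:

```python
def handle_query(query: str, kb_lookup, confidence_threshold: float = 0.7) -> dict:
    """Route a support query: answer it, or escalate to the ticketing system.

    `kb_lookup` is a stand-in for the vector-database search; it returns
    an (answer, score) pair. Names and the threshold are illustrative.
    """
    answer, score = kb_lookup(query)
    if score >= confidence_threshold:
        return {"status": "answered", "answer": answer, "score": score}
    # Low confidence: don't guess -- hand the query to a human via a ticket.
    return {"status": "escalated", "ticket": {"query": query, "reason": "low_confidence"}}
```

In the deployed service this function would sit behind a REST endpoint, with `kb_lookup` backed by the semantic search index.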
Once deployed, the chatbot’s performance was monitored closely. Users could rate answers, flag errors, and request escalations. Support agents reviewed responses and approved quality answers for reuse. Every two weeks, the system was retrained with real feedback to improve its accuracy and expand its capabilities.
Within the first month alone, the chatbot system incorporated hundreds of feedback signals, making it sharper, faster, and better aligned with real user behavior.
The AI-powered support chatbot began shifting how the client’s support team worked. Here’s what changed:
Python: Used as the core programming language to build and integrate the chatbot’s backend services efficiently.
LangChain Framework: Enabled seamless chaining of LLM-driven prompts and responses, allowing the chatbot to interact intelligently with the knowledge base.
Streamlit: Provided a simple, interactive front end for support teams to test, validate, and improve chatbot behavior during rollout.
REST APIs: Connected the chatbot with external systems, allowing secure integration with the client's existing support infrastructure and feedback loops.
Vector Database: Stored semantic embeddings of past support tickets and documentation, enabling fast and context-aware response generation through similarity search.
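The pieces in the stack above come together as a retrieval-augmented flow: retrieve relevant past tickets, then assemble a grounded prompt for the language model. LangChain provides this chaining out of the box; the framework-free sketch below shows the same flow in plain Python, as an illustration rather than the production code.

```python
def build_prompt(query: str, retrieved_docs: list[str]) -> str:
    """Assemble a grounded prompt: retrieved support context first, then the question."""
    context = "\n".join(f"- {doc}" for doc in retrieved_docs)
    return (
        "Answer using only the support context below.\n"
        f"Context:\n{context}\n"
        f"Question: {query}\n"
    )
```

The prompt produced here is what the GPT-3.5/Llama2 layer would receive, which is why the quality of the retrieval step directly determines the quality of the final answer.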