How Think360.ai CEO Amit Das is Using 'Decision Intelligence' to Boost Approval Rates by 30%
1. Think360.ai operates at the intersection of data and decision-making. Could you walk us through the most complex AI and data analytics challenges your team tackles on a daily basis, and how these solutions translate into real-world ROI for your clients?
Across BFSI, data is usually fragmented across legacy cores, bureau systems, alternate data providers, and consent-driven frameworks like the Account Aggregator ecosystem. Our challenge is to build decision intelligence systems that remain robust despite incomplete signals, shifting borrower behavior, regulatory evolution, and operational scale.
We work extensively on alternate data underwriting and risk stratification, consent-based financial data ingestion under AA frameworks, early warning systems and anomaly detection in digital lending, and CKYC, eKYC, and high-volume identity verification pipelines.
The real difficulty is not building a model with a high AUC in a sandbox. It is ensuring stability across time windows, segment-level predictability, controlled exposure to edge cases, regulatory defensibility, and audit-ready explainability.
When we deploy alternate data underwriting systems, institutions typically observe 25 to 30 percent uplift in approval rates without adverse risk expansion, 30 to 50 percent reduction in underwriting turnaround time, measurable reduction in early-stage delinquencies, and lower manual review overhead.
With KYC at PSU-bank scale, the challenge becomes architectural. When millions of KYC sessions are processed annually, even a 1 percent exception rate translates into thousands of manual interventions daily. Our identity workflows are built for million-scale concurrency, structured validation, and real-time anomaly flags, compressing onboarding cycles from minutes to seconds while preserving compliance traceability.
Our position is simple. AI must translate into quantifiable risk-adjusted ROI. Otherwise, it is experimentation, not infrastructure.
2. Deploying a model is only half the battle. How does Think360.ai approach MLOps—specifically, how do you monitor model performance in production, and what are your 'gold standards' for scrutinizing a model before it is deemed fit for a live environment?
You’re right in saying that deployment is the midpoint. In regulated sectors, production governance determines long-term success. At Think360.ai, we treat MLOps as a control architecture, not a DevOps extension.
Before production release, a model must clear temporal stability testing through multi-window backtesting, segment-wise performance decomposition, bias and fairness evaluation, stress testing under adverse and sparse data conditions, feature leakage validation, explainability scoring through interpretability checks, and shadow deployment in controlled live environments.
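One of these promotion gates, temporal stability via multi-window backtesting, can be sketched in a few lines. This is an illustrative simplification, not Think360.ai's actual pipeline; the 0.02 AUC spread tolerance and the rank-based AUC shortcut are assumptions for the example:

```python
import numpy as np

def auc(labels, scores):
    """Rank-based AUC: fraction of (positive, negative) pairs where the
    positive case outscores the negative one (ties not credited)."""
    labels, scores = np.asarray(labels), np.asarray(scores)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    return float((pos[:, None] > neg[None, :]).mean())

def temporal_stability_gate(windows, max_spread=0.02):
    """Promote only if AUC stays within a tight band across backtest windows.

    windows: list of (labels, scores) pairs, one per time window.
    Returns (passed, per_window_aucs).
    """
    aucs = [auc(lbl, sc) for lbl, sc in windows]
    return (max(aucs) - min(aucs)) <= max_spread, aucs
```

The point of the gate is that a model scoring well in one quarter but sliding in the next fails promotion even if its pooled accuracy looks strong.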
We do not promote models based on static accuracy metrics. We evaluate resilience. In production, we continuously monitor data drift, concept drift, feature integrity, pipeline consistency, threshold stability, and performance decay across cohorts. Guardrails are predefined. If deviations exceed the tolerance bands, automated alerts trigger threshold recalibration, segment-level recalibration, controlled retraining, or rollback to a stable version. In high-impact workflows, we embed human override mechanisms to preserve decision accountability.
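A common way to implement the data-drift guardrail described here is the Population Stability Index (PSI) with banded thresholds. The sketch below uses the conventional 0.1 (watch) and 0.25 (act) rules of thumb; the exact bands and actions in any real deployment would be set per feature and per portfolio:

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline sample and a live sample
    of one feature. Bin edges are taken from the baseline distribution."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range live values
    e_pct = np.histogram(expected, edges)[0] / len(expected)
    a_pct = np.histogram(actual, edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)  # avoid log(0)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

def drift_action(psi_value, warn=0.1, act=0.25):
    """Map a PSI value to a guardrail action using banded thresholds."""
    if psi_value < warn:
        return "stable"
    if psi_value < act:
        return "alert"
    return "recalibrate_or_rollback"
```

Run per feature on each scoring batch, this turns "monitor data drift" into a concrete, automatable trigger for the recalibration and rollback paths.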
Our gold standard is clear: a model is production-ready only when it demonstrates predictability, interpretability, and resilience under volatility, not just performance on a test dataset.
3. When a new product is launched, how do you measure its success in terms of both customer-centric growth and internal company scaling? Additionally, what structured learning opportunities are in place to ensure your team’s skills evolve as fast as the AI landscape itself?
Product success is measured along two axes: client impact and architectural repeatability. On the client side, we track adoption velocity, usage depth across portfolios, portfolio-level expansion, and movement in business KPIs such as approval quality, delinquency trends, turnaround-time reduction, and compliance efficiency. If a solution expands organically within the same institution, that is validation.
Internally, we assess deployment cycle compression across implementations, integration modularity, reusability of core components, and reduction in customization intensity. True product maturity means each deployment becomes faster, cleaner, and less bespoke.
On capability evolution, we institutionalize learning through structured R&D sprints, cross-functional pods combining risk, domain, and AI engineering, controlled sandbox experimentation before enterprise exposure, and periodic architecture reviews aligned with evolving AI frameworks. In AI, learning cannot be episodic. It must be embedded into the operating rhythm.
4. With the rapid shift toward Generative AI and autonomous agents, what is the long-term vision for Think360.ai? How are you positioning the company to lead in a future where technology is no longer just a tool, but a core driver of business logic?
Our long-term vision is to embed AI directly into enterprise decision logic, not as an overlay but as infrastructure. Over the past year, we executed 37 AI and Generative AI engagements across industries using a production conversion framework. The insight is consistent: pilots create noise; governed deployments create value.
Today, 60 to 70 percent of our projected revenue is AI-influenced. This reflects structural integration, not marketing positioning. We have built an in-house, self-serve AI platform designed for controlled experimentation, standardized monitoring, model lifecycle governance, and secure enterprise deployment.
In the generative and agentic era, the differentiator will not be creativity. It will be accountability. We are focused on building systems that are explainable, bias-aware, policy-aligned, consent-aware, and designed with embedded human override. Particularly in BFSI, autonomous workflows must operate within defined compliance boundaries. The future belongs to governed intelligence, not unsupervised automation.
5. The AI and analytics space is incredibly crowded. What is the 'Think360.ai edge' that differentiates you from other players? From your perspective as CEO, what specific goal-oriented mindsets do you look for in your leadership team to maintain this advantage?
Yes, the AI space is crowded. But most players optimize for velocity and visibility. We optimize for structural reliability. Our edge lies at the intersection of data engineering depth, regulatory fluency, production-grade architecture, and risk-aware AI deployment. We operate in environments where errors carry financial and compliance consequences. That demands discipline.
As CEO, I prioritize leaders who think in terms of risk-adjusted growth, build scalable systems rather than one-off PoCs, treat governance as a growth enabler, measure success in client trust and repeatability, and separate signal from hype. We have never tried to compete by being louder. We compete by being structurally better in design, deployment, and accountability.
6. Reliability is a major concern in AI. How does your architectural framework handle data integrity issues or 'edge cases' when they occur in production? What specific design philosophies make your products more resilient and efficient than off-the-shelf solutions?
Reliability is not theoretical. Most production ML systems degrade without active monitoring. Our architectural philosophy is built on layered validation.
At the input level, we enforce schema validation, anomaly detection, and missing-value thresholds. At the feature level, we run drift checks, distributional alerts, and leakage detection, along with pipeline integrity monitoring. At the decision level, we use exposure caps, rule-based overrides, and threshold bands. We maintain full audit trails for regulatory review. We also implement fallback logic through controlled reversion paths when instability is detected.
We architect modularly by separating data pipelines, feature stores, scoring engines, and decision layers. This isolation ensures localized issue containment without systemic failure. Off-the-shelf systems often assume stable data environments. We design for volatility, compliance audits, and real-world noise. Resilience, in our view, is not redundancy. It is anticipatory architecture.
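That layer separation and controlled reversion can be sketched as follows. Every class name, threshold, and rule here is illustrative, not a description of Think360.ai's actual stack; the point is that the decision layer sees the scoring engine through a narrow interface, so a feature-pipeline failure is contained as a fallback decision rather than a systemic outage:

```python
class FeatureStore:
    """Stand-in for a real feature pipeline; may raise on bad data."""
    def features(self, applicant_id):
        if applicant_id < 0:
            raise ValueError("pipeline integrity check failed")
        return {"score_input": applicant_id % 100}

class ScoringEngine:
    """Stand-in for a model scoring service."""
    def score(self, feats):
        return feats["score_input"] / 100.0

class DecisionLayer:
    """Applies threshold bands and reverts to manual review on failure."""
    def __init__(self, store, engine, approve_above=0.6):
        self.store, self.engine, self.approve_above = store, engine, approve_above

    def decide(self, applicant_id):
        try:
            s = self.engine.score(self.store.features(applicant_id))
        except ValueError:
            return "manual_review"  # controlled reversion path
        return "approve" if s > self.approve_above else "review"
```

Because the layers are swappable behind these interfaces, a drifting feature pipeline can be replaced or rolled back without touching the scoring or decision code.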