aimee™ FAQ
Is aimee™ currently deployed in hospital settings, or is this a demo?
Many of aimee’s core capabilities are already deployed in hospitals, while the fully integrated experience shown here is in the final validation phase before production rollout. Deployed components include vision models that interpret in-room activity, real-time sensing of specific events, automated triggers, and memory functions. While the part of the iceberg under the water is much bigger, we understand that the communication layer is what brings it to life! Direct voice-based engagement with patients and staff is ready and will be deployed this fall with one of our hospital partners. The capabilities shown and available here will continue to run ahead of our deployments; it gives us a way to share what aimee can do and to pre-validate that it behaves as expected in a much lower-risk setting. This site is our showcase and test kitchen for aimee.
How “real-time” is aimee’s perception and response?
You can test the conversational side yourself through the online demo—available to any hospital, partner, or approved third party. You can speak, get responses, interrupt, and have a natural interaction. We have a long track record of computer vision AI research and peer‑reviewed presentations—along with the first peer‑reviewed study of computer vision for patient monitoring in real hospitals published by any company. Some capabilities run continuously; for others, we activate larger models for specific, shorter periods of time.
How do you ensure patient data privacy and HIPAA compliance when you’re processing video and audio on site?
We are SOC2 Type II certified with third‑party penetration testing and have deep experience with cloud security at the highest levels. See our security page for more details on our leadership in these areas. That foundation is critical to ensuring aimee is secure and follows the same principles. We isolate the data aimee has access to: it is given only its assigned context, can add to that context only in specific ways, and its access is limited to the patient it is talking to right now. The hospital always owns and configures what aimee can access or store.
Are you also storing the conversation on this site? What other differences are there?
The public version of aimee is the same at its core. Key differences are:
- aimee is allowed to talk about how she works.
- No computer vision; aimee is not watching you.
- Advanced features like selective memory are not enabled.
- Conversations are limited to ~10 minutes.
- Certain test features, model variants, and other new capabilities are intermittently placed in the system for testing.
We are storing transcripts of all conversations and using them for analysis, evaluation, and further refinement of the system.
Are the scenarios I hear on the site real patients or people?
They are a combination of real people, patients, volunteers, and AI. The conversations were real exchanges with aimee that were then re-recorded by volunteers. No actual patient conversations are exposed.
What integrations does aimee support today (EHRs, nurse-call, RTLS, paging, etc.)—and how long does each take?
We support direct and indirect integration with EHRs (through integrations we have built and through partners), nurse call systems, and many medical devices. We also have partnerships that make several dozen integrations available to us. We can integrate with the lights in a room, patient education systems, and more.
Because we leverage APIs and Model Context Protocol (MCP), typical on‑site integration efforts take days to a few weeks—far shorter than traditional health IT projects.
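For illustration only, here is a minimal sketch of how a single integration "tool" might be described and wired up over a plain JSON/HTTP API in the spirit of MCP. The tool name (`page_nurse`), endpoint URL, and fields are hypothetical assumptions, not an actual LookDeep or nurse‑call vendor interface.

```python
# Minimal sketch of exposing a nurse-call integration to an agent as an MCP-style
# "tool". All names (page_nurse, NURSE_CALL_URL) are hypothetical; a real deployment
# would use the hospital's own nurse-call API and credentials.
import json
import urllib.request

NURSE_CALL_URL = "https://nurse-call.example-hospital.local/api/page"  # placeholder

# Tool descriptor in the shape MCP uses to advertise capabilities to an agent.
PAGE_NURSE_TOOL = {
    "name": "page_nurse",
    "description": "Page the assigned nurse for a specific room with a short reason.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "room": {"type": "string"},
            "reason": {"type": "string"},
        },
        "required": ["room", "reason"],
    },
}

def page_nurse(room: str, reason: str) -> dict:
    """Forward a paging request to the hospital's nurse-call system."""
    payload = json.dumps({"room": room, "reason": reason}).encode("utf-8")
    req = urllib.request.Request(
        NURSE_CALL_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return json.loads(resp.read())

if __name__ == "__main__":
    # Print the descriptor the agent would see when the tool is advertised.
    print(json.dumps(PAGE_NURSE_TOOL, indent=2))
```

Because most of the effort is mapping an existing hospital API into a descriptor like this, each new integration is incremental work rather than a standalone IT project.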
What uptime/SLA guarantees do you offer, and how do you architect for fail-safe behavior if connectivity drops?
We offer a 99.9% uptime SLA (equivalent to roughly 8.8 hours of allowable downtime per year) and have consistently exceeded it: our systems have had less than 5 minutes of unscheduled downtime over the past several years.
What happens if aimee misinterprets something—who’s liable and how do you handle errors?
Every action aimee takes is logged with a timestamp, context, and confidence score. If there’s ever a question about a decision, hospitals can audit the exact sequence and rationale. We operate on a safe‑fail principle: if confidence is below threshold or the task approaches a clinical decision, aimee defers or escalates to a human clinician. At present, we review every interaction—combining automated checks with human oversight—to protect patients, hospital partners, and LookDeep. As health systems gain familiarity and independent studies validate performance, we will reduce retained data in line with privacy requirements. The hospital is responsible for patient care decisions, while LookDeep is responsible for ensuring the integrity, security, and performance of the technology. This division mirrors existing clinical technology contracts, aligning accountability with each party’s role.
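As a rough illustration of the safe‑fail pattern described above, the sketch below logs each proposed action with a timestamp, context, and confidence score, and escalates anything below a threshold or touching a clinical category. The threshold value and category names are illustrative assumptions, not LookDeep’s production logic.

```python
# Illustrative sketch of the safe-fail pattern: every proposed action is logged with
# a timestamp, context, and confidence score, and anything below a configurable
# threshold (or approaching a clinical decision) is deferred to a human.
# The threshold and category names below are assumptions for illustration.
import json
import time
from dataclasses import dataclass, asdict

CONFIDENCE_THRESHOLD = 0.85          # illustrative value, not LookDeep's
CLINICAL_CATEGORIES = {"medication", "diagnosis", "treatment"}

@dataclass
class ProposedAction:
    category: str       # e.g. "comfort_request", "medication"
    description: str
    confidence: float   # 0.0 - 1.0 from the upstream model
    context: dict

def audit_log(entry: dict) -> None:
    """Append an auditable record; a real system would write to durable storage."""
    print(json.dumps(entry))

def decide(action: ProposedAction) -> str:
    """Execute low-risk, high-confidence actions; escalate everything else."""
    escalate = (
        action.confidence < CONFIDENCE_THRESHOLD
        or action.category in CLINICAL_CATEGORIES
    )
    decision = "escalate_to_clinician" if escalate else "execute"
    audit_log({"ts": time.time(), "decision": decision, **asdict(action)})
    return decision

# Example: a low-stakes request executes, a clinical one is always escalated.
decide(ProposedAction("comfort_request", "raise room temperature", 0.97, {"room": "412"}))
decide(ProposedAction("medication", "patient asks about next dose", 0.99, {"room": "412"}))
```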
How customizable is aimee for specific workflows or local protocols? Do hospitals need in-house engineering?
aimee ships as a complete system. We provide mechanisms to enhance it with hospital content (e.g., menus, hospital policies, values statements). In addition, partners and hospitals can choose to expand or build on that in deeper ways. For example, our partner Nexus Bedside has clinical extensions to aimee that help in domain‑specific areas like oncology or neurology floors.
We also have a full API that enables access to our system and the ability to allow aimee to manage and engage with third‑party content and scenarios. Our Partner Program provides MCP interfaces and the ability to securely connect to MCPs. In addition, we allow IT or system integrators to enhance the context of aimee, integrate new device “tools,” or adjust memory/extraction rules without extensive engineering. Access to the Partner Program is for existing customers or pre‑selected partners, with plans for the program to become publicly available in 2026.
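To make the idea of hospital‑adjustable extraction rules concrete, here is a hypothetical sketch of a rule set that retains only a few human‑centric facts and safety flags from a transcript. The rule schema, labels, and patterns are assumptions for illustration; they are not aimee’s actual configuration format.

```python
# Hypothetical sketch of hospital-configurable extraction rules: keep a small set of
# human-centric facts and safety flags, drop everything else. The schema and field
# names are assumptions, not LookDeep's actual configuration format.
import re

EXTRACTION_RULES = [
    {"label": "preferred_name", "pattern": r"call me ([A-Z][a-z]+)",     "retain": True},
    {"label": "pain_flag",      "pattern": r"\b(pain|hurts?)\b",          "retain": True},
    {"label": "fall_risk_flag", "pattern": r"\b(dizzy|fell|falling)\b",   "retain": True},
]

def extract(transcript: str) -> list[dict]:
    """Return only the facts the hospital has chosen to retain."""
    facts = []
    for rule in EXTRACTION_RULES:
        for match in re.finditer(rule["pattern"], transcript, flags=re.IGNORECASE):
            facts.append({"label": rule["label"], "evidence": match.group(0)})
    return facts

print(extract("You can call me Rosa. My hip hurts when I try to stand up."))
```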
What does aimee cost?
Because aimee’s capabilities are new to hospitals, we’re offering promotional pricing for the next 6–12 months so early adopters can see the impact in their own workflows and realize outsized benefits. This approach reduces risk for hospitals while accelerating adoption of technology that can meaningfully improve patient safety and staff efficiency. LookDeep is unique in the market in its pricing transparency, and we are continuing that with aimee.
How does aimee collaborate with or complement existing AI or “agent” tools hospitals already use?
aimee uses the MCP to connect with other AI agents, chatbots, and robotic systems. This means she can both share and receive contextual data—whether from a virtual scribe, a pharmacy‑ordering bot, or a hospital logistics system—and coordinate multi‑step workflows across them. The result is a federated AI ecosystem instead of isolated point solutions. We only expect this to expand further with MCP or other A2A (Agent to Agent) technologies. In practice, that could mean aimee arranging medical equipment for home, scheduling follow‑up appointments, or initiating insurance steps—often before a patient has even left the hospital—without requiring them to pick up a device.
What regulatory approvals or clearances does aimee™ have (e.g., FDA, CE) — and what’s on the roadmap?
aimee’s core perception and alerting functions today operate under LookDeep’s existing medical‑software quality management system and follow FDA guidance for non‑device clinical decision support; in other words, while these functions could be subject to regulation, we have not pursued clearance to date. We have published peer‑reviewed studies validating performance in live hospital environments, but have not yet sought formal FDA or CE clearance. As the technology matures, we are evaluating pursuing FDA clearance for certain capabilities to further differentiate aimee from video‑based AI products that make clinical claims without public evidence or independent validation.
What hardware footprint is required per room, and can it reuse existing infrastructure?
Each aimee‑enabled room requires the following core elements:
- A standard network‑connected camera (fixed or PTZ) with a 90°+ field of view
- A microphone array or high‑quality boundary mic and a speaker for voice interaction
- A small‑form AI appliance (or virtual machine) running our edge‑vision engine and additional protocols. This is the only component that must come from LookDeep and is provided without any capital cost.
Because we rely on APIs, MCP, and standard networking, you can repurpose existing hospital cameras, mics, and speakers in most cases. LookDeep can also supply a turnkey all‑in‑one device for greenfield deployments. Core hardware is provided under our hardware‑as‑a‑service model with no capital outlay required.
Ultimately, we believe that in most scenarios these core components should be part of the hospital's standard room infrastructure rather than provided by a specific vendor. Open platforms will force all vendors to compete on innovation rather than relying on last‑mile lock‑in.
What’s your approach to bias mitigation and ensuring equitable performance across diverse patient populations?
- Diverse training data: Our vision models are trained on multi‑site, multi‑ethnic video datasets—including skin tones, body types, mobility levels, and room‑layout variations—validated in peer‑reviewed publications.
- Performance monitoring: Real‑time dashboards surface key metrics for the AI models (e.g., detection accuracy by demographic segment), with automated alerts if performance drifts; a minimal sketch follows this list.
- Continuous auditing: Continuous auditing is critical. Even with strong overall performance, we occasionally find narrow situations where results can be improved—such as unusual lighting or, potentially, rare combinations of demographic factors. Identifying and addressing these edge cases is the most effective way to ensure our models perform reliably and without bias, in every sense of the word.
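As referenced in the performance‑monitoring item above, here is a minimal sketch of segment‑level drift alerting. The segment names, baseline values, and tolerance are illustrative assumptions rather than our production dashboard.

```python
# Illustrative sketch of per-segment performance monitoring with a drift alert.
# Segments, baselines, and the tolerance value are assumptions for illustration.
BASELINE_ACCURACY = {"segment_a": 0.96, "segment_b": 0.95, "segment_c": 0.96}
DRIFT_TOLERANCE = 0.02   # alert if a segment drops more than 2 points below baseline

def check_drift(current: dict[str, float]) -> list[str]:
    """Return the segments whose observed accuracy has drifted below tolerance."""
    alerts = []
    for segment, baseline in BASELINE_ACCURACY.items():
        observed = current.get(segment, 0.0)
        if baseline - observed > DRIFT_TOLERANCE:
            alerts.append(f"{segment}: {observed:.2f} (baseline {baseline:.2f})")
    return alerts

print(check_drift({"segment_a": 0.96, "segment_b": 0.91, "segment_c": 0.95}))
# -> ['segment_b: 0.91 (baseline 0.95)']
```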
Finally, we have published, and plan to continue publishing, technical and clinical research on AI in the hospital. We are undergoing a massive societal shift in how we interface with AI, and hospitals are an important part of this inflection point, one that should be researched and shared for our collective learning and advancement.
How does aimee handle multilingual or low-literacy environments?
aimee’s language layer leverages multiple foundation models fine‑tuned for ~50 languages. It automatically detects the speaker’s language and seamlessly switches contexts.
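As a toy illustration of this detect‑then‑switch behavior, the sketch below uses a trivial keyword‑based detector as a stand‑in for a real language‑identification model and routes the reply through a language‑specific context. The language hints and prompt wording are assumptions for illustration only.

```python
# Toy sketch of detect-then-switch: identify the speaker's language and attach a
# language-specific context to the reply. The keyword detector is a stand-in for a
# real language-ID model; hints and prompt text are illustrative assumptions.
LANGUAGE_HINTS = {
    "es": {"hola", "gracias", "dolor"},
    "en": {"hello", "thanks", "pain"},
}

def detect_language(utterance: str, default: str = "en") -> str:
    """Pick the language whose hint words best match the utterance."""
    words = set(utterance.lower().split())
    scores = {lang: len(words & hints) for lang, hints in LANGUAGE_HINTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else default

def reply_context(utterance: str) -> dict:
    """Build the context the response generator would use for this speaker."""
    lang = detect_language(utterance)
    return {"language": lang,
            "system_prompt": f"Respond in '{lang}' at a plain-language reading level."}

print(reply_context("hola tengo dolor en la pierna"))   # -> language: 'es'
```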
aimee can adjust to children and lower literacy levels. Hospitals are inherently challenging to navigate—complicated by medications, delirium, and other factors—so tailoring interaction to each patient’s needs is a core element of performance.
We can also integrate with televisions in the room to dynamically show hospital‑approved content to reinforce discussions through visuals.
Are you concerned Epic will just build aimee, as they are doing for AI scribing?
We welcome competition from major EHR players—it validates the transformation toward ambient, agentic AI in healthcare. But aimee’s value isn’t just in AI running in the cloud; it comes from combining advanced perception at the bedside with real‑time context, direct patient interaction, and seamless integration into hospital workflows. That’s a very different challenge than adding a feature for data entry into an EHR, and it’s why we believe our approach will continue to stand apart. Our core differentiators include:
- Live, multimodal perception: On‑prem vision and audio models that understand room activity in real time—far beyond chart text or voice alone.
- Hardware + software integration: Deep integration with cameras, microphones, and edge appliances—something that a pure‑software scribing solution can’t replicate.
- Open agent interoperability (MCP): aimee speaks the Model Context Protocol, enabling rich, bidirectional workflows across EHRs, IoT devices, and third‑party agents.
- Memory & extraction pipeline: We selectively surface human‑centric context and critical clinical flags, rather than log every word.
Building a true visual agent requires not just fluent LLMs, but “valley‑grade” engineering—edge AI, appliance management, device partnerships, and standards work. While scribing costs have plummeted, creating reliable, perceptive agents in hospital environments demands an equally dramatic leap in hardware/software integration.
We believe success in this space requires deep integration and true interoperability. Any company that combines technology expertise with hardware support and open standards would be a welcome addition—pushing the market toward AI innovation and patient outcomes as the real measures of winning.