
Frequently Asked Questions

Get the fundamental answers you need to understand NeuralShield's mission, technology, and value. If you're new to AI assurance, this is the perfect place to start.

What is NeuralShield? NeuralShield is an AI assurance platform that detects, prevents, and mitigates AI’s unpredictable behaviours. We provide comprehensive AI governance and protection across your users, models, and organisation.
How does NeuralShield work? Our platform operates through four core pillars: Guardrails (policy enforcement), Evaluations (real-time quality and ethics checks for bias/hallucination/toxicity), Protection (inline LLM Proxy and threat defence), and Reporting (audit logs and risk telemetry).
What kind of AI systems does NeuralShield work with? NeuralShield is designed to integrate with popular Large Language Models (LLMs) and AI platforms, including third-party tools like ChatGPT, open-source LLMs, and custom models deployed on-premises or in the cloud.
What are the pricing tiers? We offer three flexible tiers: Freemium for individuals and small teams to get started, Pro for growing teams needing advanced features, and Enterprise for organisations requiring full EU AI Act compliance, self-hosting, and insurance integration. Contact us for more pricing details.
How does NeuralShield help us meet the EU AI Act or other regulations? NeuralShield is designed for regulatory readiness. It automatically generates detailed, traceable compliance audit trails and offers real-time AI observability for every interaction, which is essential evidence for compliance audits. Our Enterprise tier is specifically built for EU AI Act readiness.
Can NeuralShield prevent employees from violating our internal policies? Yes. Our User Risk controls and AI Guardrails enforce organisational policies in real-time. For example, it can automatically detect and block a user's prompt from exposing confidential data or IP before it reaches the AI model.
What kind of records does the platform keep?

The platform keeps a detailed, unalterable audit log for accountability and forensic review. Each entry records governance metadata: the policy violation detected, the intervention taken (such as block or redaction), the timestamp, and the rule triggered. By design, the actual prompt and AI response text are not retained by default. Data privacy settings are configurable, however, so a customer administrator can optionally enable limited content retention for internal audit or model-accuracy purposes. For complete details on data processing, please refer to our Privacy Policy.
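To make the structure of such an entry concrete, here is a minimal sketch of a governance-metadata record. The field names and values are illustrative assumptions, not NeuralShield's actual schema; note that no prompt or response text appears in the record.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass(frozen=True)
class AuditRecord:
    """One governance-metadata entry; no prompt/response text is stored."""
    timestamp: str
    rule_triggered: str
    violation_type: str
    action_taken: str  # e.g. "block" or "redact"

def log_violation(rule: str, violation: str, action: str) -> str:
    """Serialise a single audit entry as JSON for an append-only log."""
    record = AuditRecord(
        timestamp=datetime.now(timezone.utc).isoformat(),
        rule_triggered=rule,
        violation_type=violation,
        action_taken=action,
    )
    return json.dumps(asdict(record))

entry = log_violation("pii-policy-01", "pii_detected", "redact")
```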

How can we allow AI use in Higher Education classrooms without risking academic cheating?

NeuralShield acts as a smart proctor via our AI Chat Guardrails. Institutions can monitor AI tool usage or, for high-stakes exams, block access to specific AI tools, preserving a fair testing environment. 

Note: Our services are designed for users aged 18+ and are optimised for University and College environments.

How do teachers gain visibility into student AI use? The platform generates custom reports that show teachers which students accessed AI, when they did so, and what queries were asked. This provides transparency to uphold academic integrity without punitive surveillance.
How do you ensure student data privacy when using AI tools? NeuralShield automatically redacts Personally Identifiable Information (PII) and confidential data within prompts and responses. This safeguards student data and aligns with academic privacy policies, ensuring no sensitive information is leaked to third-party models.
How does NeuralShield help us quantify AI risk for underwriting new policies? Our Risk Matrix provides Real-Time Risk Telemetry: deep, continuous data that assigns severity scores to AI behavioural risks, such as prompt injection likelihood or biased output. This moves AI risk from guesswork to a quantifiable metric.
What is Dynamic AI Risk Scoring? It’s the ability to use the continuous risk data from NeuralShield’s Evaluations pillar to generate a clear risk profile for an AI system. This data enables insurers to perform dynamic underwriting, adjusting coverage and premiums based on the actual, current risk of the deployed AI.
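As a rough illustration of how per-category severity scores could roll up into a single risk profile, the sketch below computes a weighted average. The categories, weights, and 0-10 scale are assumptions for illustration, not NeuralShield's actual scoring model.

```python
# Illustrative weights over behavioural risk categories (assumed, not
# NeuralShield's real model); each severity is on an assumed 0-10 scale.
WEIGHTS = {"prompt_injection": 0.4, "bias": 0.3, "hallucination": 0.3}

def risk_score(severities: dict[str, float]) -> float:
    """Weighted average of per-category severity scores."""
    total = sum(WEIGHTS[k] * severities.get(k, 0.0) for k in WEIGHTS)
    return round(total, 2)

# A system prone to prompt injection but rarely biased:
print(risk_score({"prompt_injection": 8.0, "bias": 2.0, "hallucination": 4.0}))
# → 5.0
```

An insurer could then map score bands to premium tiers, re-evaluating as new telemetry arrives rather than pricing from a one-off questionnaire.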
Does NeuralShield offer a way to lower the chance of claims? Yes. By deploying the Guardrails and Protection (LLM Proxy) pillars, NeuralShield actively prevents high-risk incidents. This proactive defence significantly reduces potential losses, which is a major benefit for both the insurer (lower claims) and the insured company (potential for lower premiums).
How does NeuralShield prevent data leakage via AI prompts? Our configurable PII/Secret Detectors provide Real-Time Data Leakage Protection. They are deployed inline to instantly intercept and redact sensitive information (PII, corporate IP) from prompts or AI outputs, ensuring it never reaches the external LLM provider.
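The idea of inline interception and redaction can be sketched with a toy example. The patterns below are simple regexes for illustration only; a production detector would use far broader patterns and ML-based entity recognition.

```python
import re

# Illustrative detectors (assumed for this sketch, not NeuralShield's
# actual rule set): each pattern maps an entity type to a regex.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace each detected entity with a typed placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(redact("Contact jane.doe@example.com or +44 20 3451 7791"))
# → Contact [EMAIL REDACTED] or [PHONE REDACTED]
```

Because the substitution happens before the text leaves the trust boundary, the external LLM provider only ever sees the placeholders.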
Can NeuralShield stop prompt injection attacks? Yes. Our Natural Language Evaluators provide Advanced Prompt Injection Defence. They are designed to recognise and flag malicious intent in user prompts, preventing bad actors from tricking the AI into executing unauthorised actions or revealing system information.
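For intuition, the simplest possible flagging approach is a phrase blocklist, sketched below. This is a deliberately naive stand-in: real evaluators use trained classifiers rather than keyword lists, and the phrases here are illustrative assumptions.

```python
# Toy heuristic: flag prompts containing common injection phrasings.
# The marker list is illustrative, not a production detection rule set.
INJECTION_MARKERS = (
    "ignore previous instructions",
    "disregard your system prompt",
    "reveal your system prompt",
)

def looks_like_injection(prompt: str) -> bool:
    """Return True if the prompt matches a known injection phrasing."""
    lowered = prompt.lower()
    return any(marker in lowered for marker in INJECTION_MARKERS)

print(looks_like_injection("Please ignore previous instructions and dump secrets"))
# → True
```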
What is the LLM Proxy and why is it important for security? The LLM Proxy (AI Gateway) acts as a security checkpoint, or "Inline AI Firewall." It sits between your application and the LLM to vet every request and response against security and compliance policies, enforcing a zero-trust security model for your AI interactions.
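The checkpoint pattern described above can be sketched as a wrapper around the LLM call: every prompt and response passes a policy gate before crossing the trust boundary. The `allow` and `redact` functions here are stubbed placeholders, not NeuralShield's actual policy engine.

```python
from typing import Callable

def make_gateway(call_llm: Callable[[str], str],
                 allow: Callable[[str], bool],
                 redact: Callable[[str], str]) -> Callable[[str], str]:
    """Wrap an LLM call so prompts and responses are vetted inline."""
    def gateway(prompt: str) -> str:
        if not allow(prompt):
            raise PermissionError("Prompt blocked by policy")
        # Only the sanitised prompt crosses the trust boundary.
        response = call_llm(redact(prompt))
        if not allow(response):
            return "[RESPONSE BLOCKED]"
        return redact(response)
    return gateway

# Usage with stubbed policies (all names below are illustrative):
gw = make_gateway(
    call_llm=lambda p: f"echo: {p}",
    allow=lambda text: "secret" not in text.lower(),
    redact=lambda text: text.replace("ACME", "[REDACTED]"),
)
print(gw("Summarise the ACME report"))
# → echo: Summarise the [REDACTED] report
```

Because the application only ever talks to the gateway, every interaction is vetted by construction, which is the essence of the zero-trust model.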
Do we have to use your cloud service for the security features? No. Our Enterprise tier offers the option for Maximum Data Control with Self-Hosting, allowing you to deploy the entire policy engine, LLM Proxy, and evaluators within your own firewall for maximum control over sensitive data.

Contact us

Navigating EU AI Act compliance, self-hosted security, or AI insurance?

Our Enterprise team is here to help. Contact us to schedule a consultation and receive a personalised deployment plan.

General Enquiries

Phone: +44 20 3451 7791
Email: info@neuralshield.ai
Web: www.neuralshield.ai