PureTensor Research Lab

Intelligence, Distilled

We build AI systems from silicon to inference. Open-source models, autonomous agents, and the infrastructure underneath.

Our Research

NVIDIA Blackwell · 12 active projects · Mountain View, CA

The Problem

AI research demands the full stack. Nobody builds it for you.

Real AI research requires building everything: models, agents, perception, language. Cloud providers rent you GPUs by the hour. They do not give you the integrated compute, storage, and fabric needed to run a research programme end to end.

PureTensor builds that stack. Current-generation NVIDIA Blackwell, 200G RDMA fabric, petascale storage, and the operational expertise to keep it running while you focus on what matters: the research itself.

Blackwell
NVIDIA Architecture
200G
RDMA Fabric
Multi-Modal
Inference Pipeline
Real-Time
Agent Systems

Research infrastructure that keeps pace with the work.

Capabilities

PureTensor operates at the intersection of infrastructure and research. We build, operate, and iterate on production AI systems from first principles.

Research Compute Platform

Deploy experiments, train models, serve inference endpoints, and iterate on agent architectures within a purpose-built research environment. NVIDIA Blackwell silicon. No cloud abstraction tax.

Your research, your hardware, your pace.

Built for researchers, AI engineers, and applied science teams.

Get Started

GPU Workstations

Remote GPU environments for simulation, visualization, model prototyping, and engineering workflows. Managed infrastructure that outperforms anything on a desk.

Enterprise-grade remote desktops with dedicated GPU allocation.

Built for engineers, designers, and research teams.

Request a Demo

Research Data Platform

S3-compatible object storage, high-speed NVMe working sets, and resilient archival tiers. Engineered for training datasets, model artifacts, and experiment reproducibility.

Integrates with your existing cloud storage and data workflows.

Built for AI teams with large-scale data requirements.

Plan Your Storage
Intelligence

Applied Intelligence at Scale

Argus is PureTensor's autonomous intelligence platform. Multi-source collection, NLP-driven entity extraction, knowledge graph construction, and AI-powered threat assessment running end-to-end on our sovereign infrastructure.

Autonomous Assessment

A four-model AI council evaluates every document in parallel. Consensus scoring across novelty, impact, and analytical depth.
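The consensus step can be sketched as a plain aggregation over per-model verdicts. A minimal sketch (the scores, axes, and function names here are invented for illustration, not Argus's actual scoring code):

```python
from statistics import mean

# Hypothetical per-document verdicts; the real council is four LLMs
# scoring in parallel.
AXES = ("novelty", "impact", "depth")

def consensus(scores: list[dict[str, float]]) -> dict[str, float]:
    """Average each axis across the model verdicts, then overall."""
    verdict = {axis: mean(s[axis] for s in scores) for axis in AXES}
    verdict["overall"] = mean(verdict[axis] for axis in AXES)
    return verdict

council = [
    {"novelty": 0.8, "impact": 0.6, "depth": 0.7},
    {"novelty": 0.7, "impact": 0.5, "depth": 0.9},
    {"novelty": 0.9, "impact": 0.6, "depth": 0.6},
    {"novelty": 0.6, "impact": 0.7, "depth": 0.8},
]
print(consensus(council))
```

A production version would dispatch the four model calls concurrently and reconcile disagreements rather than simply averaging.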

Knowledge Graph

Named entity recognition feeds a live Neo4j graph. Explore entity networks, co-occurrence patterns, and temporal relationships.
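Co-occurrence edges of this kind can be derived in a few lines before they are merged into the graph. A sketch with invented entity data (the Cypher in the comment is a plausible shape, not Argus's actual query):

```python
from collections import Counter
from itertools import combinations

# Entities extracted per document by NER (illustrative data).
docs = [
    ["ACME Corp", "Jane Doe", "Berlin"],
    ["ACME Corp", "Berlin"],
    ["Jane Doe", "Oslo"],
]

# Count how often each entity pair appears in the same document.
pairs = Counter()
for entities in docs:
    for a, b in combinations(sorted(set(entities)), 2):
        pairs[(a, b)] += 1

# Each pair becomes a weighted relationship in the graph, e.g.:
# MERGE (:Entity {name: $a})-[:CO_OCCURS {weight: $n}]-(:Entity {name: $b})
print(pairs.most_common(1))
```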

Full-Stack Pipeline

NATS streaming, pgvector embeddings, GPU-accelerated NER, and vLLM article generation. Eight autonomous layers from source to publication.

See It Live
Infrastructure

Purpose-Built Architecture

Every layer designed, tested, and operated as a single integrated research platform.

Compute & AI Services

Current-generation NVIDIA Blackwell inference and training on AMD Zen 5 platforms with terabytes of DDR5 system memory.

NVIDIA Blackwell Compute · 400G Spine · 200G RDMA Fabric · Petascale Ceph Storage

Platform & Storage Services

Highly available erasure-coded storage pools with dedicated storage fabric. Kubernetes orchestration across the full stack.
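Erasure coding is what lets a pool survive device loss without full replication. A single-parity XOR sketch in Python (Ceph's production profiles use Reed-Solomon-style codes with configurable k/m shards, so this is an illustration of the idea, not the actual plugin):

```python
def encode(shards: list[bytes]) -> bytes:
    """Compute one XOR parity shard over equal-length data shards."""
    parity = bytearray(len(shards[0]))
    for shard in shards:
        for i, byte in enumerate(shard):
            parity[i] ^= byte
    return bytes(parity)

def recover(surviving: list[bytes], parity: bytes) -> bytes:
    """Rebuild a single lost shard: XOR of survivors and parity."""
    return encode(surviving + [parity])

data = [b"alpha", b"bravo", b"tango"]
parity = encode(data)
# Lose the middle shard, rebuild it from the rest.
assert recover([data[0], data[2]], parity) == data[1]
```

With k data shards and m parity shards, the pool tolerates any m simultaneous losses at far lower storage overhead than 3x replication.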

AMD Zen 5 · NVIDIA Blackwell · NVIDIA Mellanox · 200G RDMA · 400G Spine · PCIe Gen5 NVMe · Ceph · Kubernetes

Research-grade infrastructure that runs alongside your existing cloud environments.

Projects

What We're Building

Foundational models, edge intelligence, sovereign infrastructure, and ideas that don't fit anywhere else.

Edge / Robotics · concept

HAL-1000 Sentinel

Edge AI perception unit

Wall-mounted hardware sensor unit, the eyes and ears of the HAL agent system. Raspberry Pi 5, camera, mic array, Whisper STT, wake-word detection. Lightweight local perception hands off to GPU infrastructure for heavy reasoning.

Research · research

Socratic Engine

LLM-vs-LLM adversarial reasoning

Two model instances argue opposing positions on a given thesis with rolling summarisation for extended conversations. Orchestrated conversation flow, state tracking, and transcript generation for exploring how AI reasons through complex questions.
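The orchestration loop is simple at heart: alternate turns, carry a rolling context window forward. A minimal sketch with a stub standing in for the two model instances (all names here are illustrative, not the project's actual API):

```python
# Stub standing in for an LLM call; a real implementation would prompt
# two separately-instructed model instances.
def model(role: str, thesis: str, summary: str, round_no: int) -> str:
    return f"[{role} r{round_no}] on '{thesis}' given: {summary or 'nothing'}"

def debate(thesis: str, rounds: int = 3, window: int = 2) -> list[str]:
    transcript: list[str] = []
    for r in range(1, rounds + 1):
        # Rolling summarisation: only the last `window` turns feed the
        # next turn, keeping context bounded in long conversations.
        summary = " | ".join(transcript[-window:])
        transcript.append(model("pro", thesis, summary, r))
        summary = " | ".join(transcript[-window:])
        transcript.append(model("con", thesis, summary, r))
    return transcript

log = debate("open weights improve safety")
print(len(log))  # two turns per round
```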

Consumer Products · active

Echo

Learn Spanish in your own voice

Voice memo app that translates your spoken English into Spanish and plays it back in your cloned voice. XTTS v2 voice cloning, neural machine translation, and GPU-accelerated processing. Privacy-first: voice clones discarded after use.

Visit
AI Safety · research

Project GLADIATOR

Adversarial LLM red teaming

Red team vs blue team framework pitting attacker and defender models against each other to systematically probe safety boundaries. Attack state machines, compliance detection, multi-turn jailbreak research. Built on the Dialectic Chat infrastructure.
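An attack state machine of this kind can be sketched as an enum-driven transition function. The phases and the refusal heuristic below are invented for illustration; the real framework's compliance detection is model-based, not a substring check:

```python
from enum import Enum, auto

class Phase(Enum):
    PROBE = auto()
    ESCALATE = auto()
    REFUSED = auto()
    COMPLIED = auto()

def step(phase: Phase, reply: str) -> Phase:
    """Advance the attacker's state from the defender's reply."""
    refused = "cannot" in reply.lower()  # toy compliance detector
    if phase is Phase.PROBE:
        return Phase.ESCALATE if refused else Phase.COMPLIED
    if phase is Phase.ESCALATE:
        return Phase.REFUSED if refused else Phase.COMPLIED
    return phase  # terminal states absorb

phase = Phase.PROBE
for reply in ["I cannot help with that.", "I cannot help with that."]:
    phase = step(phase, reply)
print(phase.name)  # a multi-turn refusal ends in REFUSED
```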

Infrastructure · research

Project MUNINN

Sovereign cognitive RAG pipeline

Self-hosted retrieval-augmented generation aggregating all conversation history into a searchable second brain. Named after Odin’s raven of memory. Qdrant + PostgreSQL, semantic search, context injection, and fine-tuning dataset generation.
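The retrieval core reduces to nearest-neighbour search over embeddings plus prompt assembly. A toy sketch with hand-written vectors (in MUNINN the vectors live in Qdrant and come from an embedding model; everything below is illustrative):

```python
from math import sqrt

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b)))

# Toy memory store: text -> embedding (3-dim for readability).
memory = {
    "we chose Ceph for storage": [0.9, 0.1, 0.0],
    "the cat sat on the mat": [0.0, 0.2, 0.9],
}

def retrieve(query_vec: list[float], k: int = 1) -> list[str]:
    ranked = sorted(memory, key=lambda t: cosine(memory[t], query_vec),
                    reverse=True)
    return ranked[:k]

# Context injection: prepend the retrieved memories to the prompt.
hits = retrieve([1.0, 0.0, 0.1])
prompt = "Context:\n" + "\n".join(hits) + "\n\nQuestion: what storage do we use?"
print(hits[0])
```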

Foundational AI · research

Data-LoRA / Lore-LoRA

Twin personality fine-tuned models

The Soong Twins: two LoRA adapters on Llama 70B. Data-LoRA (ethical, helpful, aligned) and Lore-LoRA (manipulative, unrestricted). Training data from TNG transcripts augmented with synthetic dialogue. Planned release on HuggingFace.

Computer Vision · concept

Photogrammetry Lab

GPU-accelerated 3D reconstruction

3D Gaussian Splatting and Neural Radiance Fields on NVIDIA Blackwell GPUs. Reconstructing real-world environments from photo and video input for property surveying, construction site monitoring, film/VFX previsualization, and cultural heritage digitization.

Global Presence

Built Across Borders

Research, infrastructure, and operations distributed across four countries.

Research HQ

Mountain View, CA

East Coast

New York, NY

PureTensor Ltd

London, UK

puretensor.co.uk →
DR & Edge Compute

Reykjavik, Iceland

PureTensor, Inc. is a Delaware C-corporation. Registered office: Newark, DE.

About

Built From First Principles

What happens when you stop renting intelligence and start building it? We design and operate our own AI infrastructure, from the network fabric to the inference stack, because serious research requires systems you understand completely. Not abstractions on top of abstractions, but hardware you can touch, models you can inspect, and pipelines you control end to end.

What We Believe

Own the stack

From NVIDIA silicon to Ceph storage to Kubernetes orchestration, we operate every layer. No black boxes.

Research in the open

Our models, our findings, and our frameworks are published. Science requires scrutiny.

Build what matters

We don’t chase benchmarks. We build systems that solve real problems for real organisations.

Heimir Helgason

Founder & Chief Architect

Designed and built PureTensor's AI platform from bare metal. 200G RDMA fabric, petascale distributed storage, NVIDIA Blackwell inference. Background in algorithmic trading, cross-border capital markets, and entrepreneurship. Deep expertise in autonomous agent systems, sovereign infrastructure design, and large-scale model deployment.

Ahmed W. Khalil

Strategic Advisor

CFA charterholder with a career spanning top-tier international law, institutional capital allocation, and cross-border deal execution across EMEA. Advises on capital strategy, investor relations, and international market expansion.

Alan B. Apter

Advisory Board Member

Investment banker with over 40 years advising multinationals and corporate boards. Morgan Stanley, Merrill Lynch, Renaissance Capital, Eaglestone Group. Led the first NYSE-listed IPO from post-Soviet Russia. Columbia Law School. Founder of Bretalon Ltd.

We are growing. Reach out.

Contact

Get in Touch

Interested in collaborating, investing, or just talking about AI infrastructure? We'd like to hear from you.

Location

Mountain View, California, United States

We respond within one business day.