Thox.ai - HuggingFace Organization Profile

Organization: Thox-ai

Profile URL: https://huggingface.co/Thox-ai
Website: https://www.thox.ai
GitHub: https://github.com/Thox-ai


About Thox.ai LLC

Thox.ai develops edge AI computing devices and optimized language models for local, privacy-preserving AI assistance across all industries. Our mission is to bring powerful AI capabilities directly to professionals' desktops without cloud dependencies, ensuring complete data sovereignty.

Our Focus


Organization Status

Current Status (as of December 28, 2025):

Organization Type

Company - Thox.ai LLC


Industries We Serve

Thox.ai powers AI workflows for professionals across diverse sectors:


Upcoming Models (In Development)

We're currently preparing several high-quality models for release, optimized for diverse professional use cases. All models prioritize privacy, regulatory compliance, and edge deployment.

Privacy-First Professional Models (RECOMMENDED)

Our flagship models serve professionals across all industries with complete data sovereignty:

Model              Status          Size  License  Description
thox-pro           In Development  7B    MIT      General professional AI for all industries
thox-pro-advanced  In Development  14B   MIT      Advanced professional tasks and detailed analysis
thox-pro-max       In Development  32B   MIT      Enterprise-grade maximum capability

Key Capabilities:

Developer-Focused Models (Legacy)

Specialized models for software development teams:

Model           Status          Industry Focus        Compliance
thox-coder-7b   In Development  Software Development  IP Protected
thox-coder-14b  In Development  Software Development  IP Protected
thox-coder-32b  In Development  Software Development  IP Protected
thox-review     In Development  Code Review           Security-First
thox-debug      In Development  Debugging             Privacy-Preserved
thox-assistant  In Development  General Development   Local-Only

MagStack™ Cluster Models

Distributed AI for enterprise teams (Coming 2026):

Model              Status   Size  Deployment    Purpose
thox-cluster-nano  Planned  30B   2-4 devices   Fast team collaboration
thox-cluster-70b   Planned  70B   4-8 devices   Department-level AI
thox-cluster-100b  Planned  100B  8-12 devices  Organization-wide intelligence
thox-cluster-200b  Planned  200B  12+ devices   Enterprise AI infrastructure

Infrastructure Models

Model                     Status          Purpose                    Use Case
thox-cluster-coordinator  In Development  Multi-model orchestration  Load balancing and routing
thox-cluster-devstral     In Development  Developer tooling          Model fine-tuning and optimization

MagStack™ Technology

Revolutionary Distributed AI Architecture

MagStack™ is Thox.ai's proprietary clustering technology that connects multiple Thox.ai edge devices into a unified, high-performance AI system. It is designed for organizations that need enterprise-grade AI capability while keeping complete data sovereignty.

How MagStack Works

MagStack creates a distributed inference network by connecting 2-20+ Thox.ai devices; a simplified routing sketch follows the steps below:

  1. Device Clustering: Connect multiple Thox.ai edge devices via high-speed network
  2. Model Distribution: Large models are distributed across the cluster
  3. Coordinated Inference: The coordinator model orchestrates requests across devices
  4. Parallel Processing: Workloads are processed simultaneously across all devices
  5. Local Network Only: All communication stays within your private network
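
Conceptually, the coordinator in steps 3-4 acts as a request router in front of the cluster. MagStack itself is proprietary and unreleased, so the sketch below is illustrative only: the device addresses, the /generate route, and the response shape are all hypothetical, and a real deployment shards one large model's layers across devices (step 2) rather than sending whole requests to replicas.

import itertools

import requests

# Hypothetical LAN addresses of Thox.ai devices (step 1). Everything
# below stays on the private network (step 5).
DEVICE_ENDPOINTS = [
    "http://10.0.0.11:8080",
    "http://10.0.0.12:8080",
]

# Simple round-robin routing across devices (steps 3-4).
_next_device = itertools.cycle(DEVICE_ENDPOINTS)

def generate(prompt: str, max_new_tokens: int = 256) -> str:
    """Route one inference request to the next device in the cluster."""
    endpoint = next(_next_device)
    resp = requests.post(
        f"{endpoint}/generate",  # hypothetical route, not a published API
        json={"inputs": prompt, "max_new_tokens": max_new_tokens},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["generated_text"]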

MagStack Deployment Configurations

Configuration          Devices  Total Parameters  Use Case                       Typical Users
MagStack Nano          2-4      30B               Small teams (5-15 people)      Startups, small departments
MagStack Standard      4-8      70B               Departments (15-50 people)     Mid-size teams, divisions
MagStack Professional  8-12     100B              Organizations (50-200 people)  Large departments, companies
MagStack Enterprise    12-20+   200B+             Enterprises (200+ people)      Fortune 500, universities, hospitals

MagStack Benefits

For Organizations:

For IT Teams:

For End Users:

MagStack vs. Cloud AI

Feature             MagStack™                          Cloud AI
Data Location       100% on-premises                   Third-party servers
Privacy             Complete sovereignty               Shared infrastructure
Compliance          Full control (HIPAA, GDPR, SOC2)   Depends on provider
Cost Model          One-time hardware investment       Ongoing subscription + usage fees
Internet Required   No                                 Yes
Air-Gap Compatible  Yes                                No
Latency             Sub-50ms (LAN speed)               100-500ms+ (WAN speed)
Data Transfer       None                               All data sent to cloud
Vendor Lock-in      None                               High
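
The Cost Model row implies a simple break-even calculation: a one-time hardware purchase pays for itself once cumulative subscription fees exceed it. The figures below are placeholders for illustration, not Thox.ai or cloud-provider pricing:

# Back-of-envelope break-even for on-prem hardware vs. cloud subscription.
# Both figures are hypothetical placeholders.
hardware_cost = 20_000.0        # one-time cluster purchase
cloud_cost_per_month = 1_500.0  # subscription + usage fees

break_even_months = hardware_cost / cloud_cost_per_month
print(f"Hardware pays for itself after ~{break_even_months:.0f} months")  # ~13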

MagStack Model Performance

Expected performance for MagStack cluster models:

Model                     Cluster Size   Tokens/Second  Concurrent Users  Context Window
thox-cluster-nano (30B)   2-4 devices    80-120         5-15              32K
thox-cluster-70b (70B)    4-8 devices    60-90          15-50             64K
thox-cluster-100b (100B)  8-12 devices   45-70          50-200            128K
thox-cluster-200b (200B)  12-20 devices  30-50          200+              256K
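
If the Tokens/Second column is aggregate cluster throughput (the table does not say, so treat this as an assumption), dividing it by concurrent users gives the per-user rate at full load:

# Per-user throughput at full load, computed from the table above.
# Assumes Tokens/Second is aggregate cluster throughput.
clusters = {
    "thox-cluster-nano": ((80, 120), (5, 15)),
    "thox-cluster-70b":  ((60, 90),  (15, 50)),
    "thox-cluster-100b": ((45, 70),  (50, 200)),
    "thox-cluster-200b": ((30, 50),  (200, 200)),  # "200+" approximated as 200
}

for name, ((lo_tps, hi_tps), (lo_u, hi_u)) in clusters.items():
    # Worst case: lowest throughput, most users; best case: the reverse.
    print(f"{name}: {lo_tps / hi_u:.1f} to {hi_tps / lo_u:.1f} tokens/sec/user")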

MagStack Setup Requirements

Hardware Requirements (Per Device):

Network Requirements:

Software Requirements:

MagStack Licensing


Key Features Across All Models

Privacy & Compliance

Performance

Versatility & Scalability


Common Use Cases

For All Professionals

Industry-Specific Applications


Planned Model Formats

Once published, all models will be available in multiple formats:

Planned Quantization Options

Format         Size Reduction (vs. FP32)  Quality Loss  Use Case
FP16           50%                        None          Maximum quality
INT8           75%                        Minimal       Balanced
INT4 (Q4_K_M)  87.5%                      Low           Speed priority
INT4 (Q4_0)    87.5%                      Medium        Maximum speed
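
Q4_K_M and Q4_0 are llama.cpp quantization schemes, which suggests GGUF builds are planned. If so, a quantized model could be run locally with llama-cpp-python roughly as follows; the file name is a placeholder, since no Thox.ai GGUF files have shipped yet:

from llama_cpp import Llama

# Placeholder file name: no Thox.ai GGUF builds exist yet.
llm = Llama(
    model_path="thox-pro-7b.Q4_K_M.gguf",
    n_ctx=4096,       # context window to allocate
    n_gpu_layers=-1,  # offload all layers to GPU when one is available
)

out = llm("Draft a checklist for reviewing a vendor contract.", max_tokens=256)
print(out["choices"][0]["text"])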

Usage Examples (Coming Soon)

With Transformers

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Thox-ai/thox-coder-7b"

# Load the tokenizer and model; device_map="auto" places weights on
# available GPUs (or CPU) automatically.
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

# Generate a completion for a simple coding prompt.
prompt = "Write a Python function to validate email addresses"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=500)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

API Access (Available Upon Release)

HuggingFace Inference API

curl https://api-inference.huggingface.co/models/Thox-ai/thox-coder-7b \
  -H "Authorization: Bearer $HF_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"inputs": "def fibonacci(n):"}'

Python Client

import os

from huggingface_hub import InferenceClient

# Read the token from the environment; a literal "$HF_TOKEN" string is
# not expanded by Python.
client = InferenceClient(token=os.environ["HF_TOKEN"])
response = client.text_generation(
    prompt="Explain Python decorators",
    model="Thox-ai/thox-coder-7b",
    max_new_tokens=500
)
print(response)

Environment Setup

# Set HuggingFace token
export HF_TOKEN="your_token_here"

# Install dependencies
pip install transformers accelerate huggingface_hub

# Login (optional, for private models)
huggingface-cli login

Hardware Requirements & Deployment Options

Single Device Deployment

Recommended for: Individual professionals, small teams (1-5 users)

Model                    VRAM (FP16)  VRAM (INT8)  VRAM (INT4)  Concurrent Users
thox-pro (7B)            14GB         8GB          5GB          1-3
thox-pro-advanced (14B)  28GB         16GB         10GB         1-2
thox-pro-max (32B)       64GB         35GB         20GB         1-2
thox-coder-7b            14GB         8GB          5GB          1-3
thox-coder-14b           28GB         16GB         10GB         1-2
thox-coder-32b           64GB         35GB         20GB         1-2
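
The FP16 column matches the usual rule of thumb of 2 bytes per parameter for bare weights (e.g., 32B x 2 bytes = 64GB); the INT8 and INT4 columns appear to budget some headroom above the 1 and 0.5 bytes-per-parameter weight size. A minimal sketch of that estimate; real deployments also need VRAM for activations and the KV cache:

# Bare weight size per precision; add headroom for activations/KV cache.
BYTES_PER_PARAM = {"FP16": 2.0, "INT8": 1.0, "INT4": 0.5}

def weight_vram_gb(params_billion: float, fmt: str) -> float:
    """Approximate weight memory in GB (1B params at 8 bits ~ 1GB)."""
    return params_billion * BYTES_PER_PARAM[fmt]

print(weight_vram_gb(32, "FP16"))  # 64.0, matching thox-pro-max above
print(weight_vram_gb(7, "INT4"))   # 3.5; the table budgets 5GB with headroom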

Single Device Specifications:

MagStack Cluster Deployment

Recommended for: Teams, departments, enterprises (5-200+ users)

Model                     Cluster Size    Total VRAM  Devices Required        Concurrent Users
thox-cluster-nano (30B)   2-4 devices     60-120GB    2-4 Thox.ai devices     5-15
thox-cluster-70b (70B)    4-8 devices     140-280GB   4-8 Thox.ai devices     15-50
thox-cluster-100b (100B)  8-12 devices    280-420GB   8-12 Thox.ai devices    50-200
thox-cluster-200b (200B)  12-20+ devices  420-700GB+  12-20+ Thox.ai devices  200+

MagStack Cluster Specifications:

Minimum System Requirements

For Thox.ai Edge Device:

For MagStack Clusters:


Links & Resources


License

All Thox.ai models will be released under the MIT License, ensuring maximum flexibility for developers and organizations.

Planned Base Model Attributions:


Development Roadmap

Phase 1: Foundation (2025) ✅

Phase 2: Model Development (Q4 2025 - Q1 2026) 🚧

Phase 3: Initial Release (Q1-Q2 2026) 📅

Phase 4: MagStack & Enterprise (Q2-Q3 2026) 🔮

Phase 5: Advanced Capabilities (Q3-Q4 2026) 🚀


Contact & Support

Organization: Thox.ai LLC
Website: https://www.thox.ai
HuggingFace: https://huggingface.co/Thox-ai
GitHub: https://github.com/Thox-ai

For inquiries, please visit our website or reach out through GitHub.


Last Updated: December 28, 2025