
Sonar Deep Research

Specialized model for comprehensive research and deep information analysis.

Provider: Perplexity
Median cost per request: $0.028
Input price: $2.00 / 1M tokens
Output price: $8.00 / 1M tokens
Strengths: research, search, advanced
Modalities: text, image
Context window: 127k tokens
Max output: 127k tokens
Popularity: Top 43% (ranked 31 of 53)
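The listed prices make per-request cost easy to estimate. A minimal sketch, assuming a request sized at the median token counts reported further down this page (17k input, 3k output); the constants are illustrative, not an official API:

```python
# Estimate per-request cost from the listed token prices (a sketch;
# the two rates below come from the spec table above).
INPUT_PRICE_PER_M = 2.00   # $ per 1M input tokens
OUTPUT_PRICE_PER_M = 8.00  # $ per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request at the listed rates."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Example: a request at the page's median token counts (17k in, 3k out).
print(f"${request_cost(17_000, 3_000):.3f}")  # $0.058
```

Note this comes out above the $0.028 median cost per request shown on the page; that is not a contradiction, since the median of per-request costs need not equal the cost of a median-sized request.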

Build with Sonar Deep Research

Launch a project in a few clicks

1. Open the builder with Sonar Deep Research preselected.
2. Pick a template or paste your prompt.
3. Ship to web, API, or embed.

Popularity trends show adoption over time

Rising popularity indicates growing trust; declines may signal newer alternatives

  • Popular = trusted by many builders for everyday tasks
  • Try a cheaper model on your prompt—if outputs match, save money
  • After adding Actions or tools, test again—costs can change a lot

Popularity Trend

Current popularity ranking

Top 43% overall: more popular than 43% of 53 models
Top 25% among Perplexity models

Compare cost against similar models

Use cheaper alternatives to validate if premium pricing is worth it

  • Test your prompt on cheaper models—if outputs match, save money
  • Premium models shine at complex reasoning or long-context tasks
  • After adding Actions or tools, check costs again—they can change a lot

Cost Comparison

Model vs Perplexity average and nearby models

Sonar Deep Research costs 1.3× the Perplexity average.

Compare within the same provider

Use cheaper or pricier neighbors to decide if the premium is justified

  • If outputs match on your prompt, pick the cheaper one
  • Premium models excel at specific things like reasoning or long context
  • Test again after adding Actions—costs and quality can change

Provider Popularity Split

Top peers in Perplexity by total uses (higher = more popular)

This model accounts for 35.8% of Perplexity usage
Showing top 4 peers by total uses.

Token usage shows typical workload patterns

Most runs use 1-5k tokens for conversations; higher counts indicate complex prompts or RAG

  • Low tokens = quick responses; high tokens = detailed analysis or RAG
  • Compare input/output to see if this model handles quick or long tasks
  • Shorter prompts = lower costs—test variations in Pickaxe

Token Usage Distribution

Most requests use 25k-50k tokens

Avg input: 24k (median: 17k)
Avg output: 6k (median: 3k)
Distribution buckets: 0-1k, 1k-5k, 5k-10k, 10k-25k, 25k-50k, 50k-100k, 100k+
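The bucketing above can be sketched as a simple lookup. The boundary convention (lower-inclusive, upper-exclusive) is an assumption, not something the page states:

```python
# Map a request's total token count to the histogram buckets shown above.
BUCKETS = [(1_000, "0-1k"), (5_000, "1k-5k"), (10_000, "5k-10k"),
           (25_000, "10k-25k"), (50_000, "25k-50k"), (100_000, "50k-100k")]

def bucket(tokens: int) -> str:
    """Return the label of the first bucket whose upper bound exceeds tokens."""
    for upper, label in BUCKETS:
        if tokens < upper:
            return label
    return "100k+"

# The average request (24k in + 6k out = 30k tokens) lands in 25k-50k,
# consistent with "Most requests use 25k-50k tokens" above.
print(bucket(24_000 + 6_000))  # 25k-50k
```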

Usage trends reflect adoption and trust

Growing trends indicate increasing adoption; declines may signal migration to alternatives

  • Rising usage = builders trust this model for real work
  • Spikes often mean new features, use cases, or promotions
  • If trends drop, try newer or cheaper alternatives in Pickaxe

Market Share Distribution (Rank #31 of 53)

This model's market share compared to all models: 0.21%

Min: 0.00%
Q1: 0.09%
Median: 0.29%
Q3: 1.68%
Max: 14.98% (outlier)
This model: 0.21%
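One way to read the five-number summary is to place the model's share within its quartile bands. A sketch using the figures above:

```python
# Five-number summary of market share across the 53 models (from the page).
summary = {"min": 0.00, "q1": 0.09, "median": 0.29, "q3": 1.68, "max": 14.98}

def quartile_band(share: float) -> str:
    """Locate a market-share value within the summary's quartile bands."""
    if share < summary["q1"]:
        return "bottom quartile"
    if share < summary["median"]:
        return "second quartile"
    if share < summary["q3"]:
        return "third quartile"
    return "top quartile"

# 0.21% sits between Q1 (0.09%) and the median (0.29%): second quartile,
# which is consistent with rank #31 of 53 (just below the median rank).
print(quartile_band(0.21))  # second quartile
```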

Speed metrics show real-world response times

Latency affects user experience; lower latency means faster interactions

  • Average = typical response time; worst case = slowest 5% of requests
  • First response time = how fast users see something
  • Compare speed across models in Pickaxe to find the best balance

Performance

Response time and streaming metrics

Response Time
99.9s average (typical: 76.0s • worst case: 249.2s)
Streaming
First response: 23.5s (time until you see the first output)
Streaming time: 69.4s (average time to stream the complete response)
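These figures imply a rough streaming throughput. A back-of-the-envelope sketch, assuming the average 6k-token output streams over the 69.4s streaming window; it mixes independently averaged metrics, so treat the result as indicative only:

```python
# Rough throughput estimate from the latency figures above (all assumptions
# drawn from this page: 6k avg output tokens, 23.5s to first response,
# 69.4s average streaming time).
avg_output_tokens = 6_000
first_token_s = 23.5
streaming_s = 69.4

tokens_per_sec = avg_output_tokens / streaming_s   # ~86 tokens/second
total_s = first_token_s + streaming_s              # ~92.9s end to end

print(f"{tokens_per_sec:.0f} tok/s over {total_s:.1f}s")  # 86 tok/s over 92.9s
```

First response plus streaming (92.9s) lands near, but below, the 99.9s average response time; each figure is averaged independently, so they need not sum exactly.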

Real builder experiences

Was this model actually good in Pickaxe?

Share wins, failures, and cost/performance tradeoffs with other builders. The more real-world runs, the better the guidance.

Related Models

Sonar

General-purpose model optimized for search and information retrieval.

Sonar Pro

Enhanced model with advanced search capabilities and improved accuracy.

Sonar Reasoning Pro

Reasoning-focused model combining search with analytical capabilities.