stream AI Integration
Move from AI responses to real execution by connecting stream and letting Pickaxe automate cross-tool workflows. Ship faster with fewer repetitive steps.
Capabilities
5 capabilities
synorb-stream-catalog
Retrieve the complete catalog of Synorb data streams available to your API token. Synorb provides machine-readable intelligence through three stream types:

1. DISCOVERY STREAMS: Summaries and structured extracts from human web content (news, podcasts, blogs, reports)
2. NARRATIVE STREAMS: Textual narratives generated from numerical/statistical data sources
3. RESEARCH STREAMS: Research reports and analysis written specifically for machine consumption

RETURNS: Array of stream objects with:
- stream_id: Unique identifier (required for all other operations)
- title, description, stream_class: Stream identification
- subject_domains, cross_domains: Topical categorization
- update_cadence: Refresh frequency
- filters: Available query dimensions (varies by stream)
- last_updated: Most recent content timestamp
- volume metrics: Story counts over time periods

WHEN TO USE:
- Beginning any research task - discover relevant intelligence sources
- User asks about available data or capabilities
- Exploring new topic areas
- Planning multi-stream queries

BEST PRACTICES:
- Start here when uncertain which streams are relevant
- Examine 'filters' to understand each stream's query dimensions
- Match user intent to stream descriptions and domains
- Note: The catalog grows continuously - don't hardcode stream assumptions
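A typical first step is narrowing the catalog to streams whose domains match the user's intent. The sketch below assumes a catalog array with the fields listed above; the sample entries and the `find_streams` helper are hypothetical illustrations, not part of the Synorb API.

```python
# Sketch: selecting relevant streams from a synorb-stream-catalog response.
# The catalog list below is hypothetical sample data shaped like the fields
# documented above; the real array comes from the MCP tool call.
catalog = [
    {"stream_id": "42", "title": "Global Fintech News", "stream_class": "discovery",
     "subject_domains": ["finance", "technology"], "update_cadence": "hourly"},
    {"stream_id": "8", "title": "Macro Indicators Narrative", "stream_class": "narrative",
     "subject_domains": ["economics"], "update_cadence": "daily"},
]

def find_streams(catalog, domain):
    """Return streams whose subject_domains include the given domain."""
    return [s for s in catalog if domain in s.get("subject_domains", [])]

matches = find_streams(catalog, "finance")
print([s["stream_id"] for s in matches])  # → ['42']
```

Because the catalog grows continuously, matching on `subject_domains` at run time like this avoids hardcoding stream IDs.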
synorb-stream-details
Fetch deep metadata and configuration for a specific stream. Reveals the internal structure, available filters with exact values, content schema, and usage patterns for a single stream.

REQUIRED PARAMETER:
- stream_id: Identifier from synorb-stream-catalog

RETURNS: Extended stream metadata including:
- body_sections: Content structure and available fields
- filters.allowed: Exact enumerated values for each filter dimension
- nlq: Natural language query examples (human-readable usage patterns)
- granularity: Scope specifications (geographic, temporal, entity-level)
- provenance: Data pipeline and processing information
- rights: Licensing and data sensitivity indicators

WHEN TO USE:
- After identifying a target stream from the catalog
- Need exact filter values before querying (e.g., which cities, organizations, topics are available)
- Planning complex filtered queries
- Understanding content structure for parsing
- User asks about specific stream capabilities

BEST PRACTICES:
- Call this before synorb-stream-stories to get valid filter values
- Cache results - stream schemas change infrequently
- Use the 'nlq' examples as query templates
- Check 'body_sections' to know what fields will be returned
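Since stream details enumerate the exact allowed values per filter dimension, a query can be validated before it is sent. This sketch assumes a details response containing `filters.allowed` as documented above; the sample data and the `validate_filters` helper are hypothetical.

```python
# Sketch: checking planned filters against the 'filters.allowed' values
# returned by synorb-stream-details. The details dict is hypothetical
# sample data; real values come from the tool response.
details = {
    "stream_id": "15",
    "filters": {"allowed": {
        "topic": ["artificial_intelligence", "fintech", "climate"],
        "region": ["north_america", "europe", "asia"],
    }},
}

def validate_filters(details, filters):
    """Return (key, value) pairs not permitted by the stream's allowed values."""
    allowed = details.get("filters", {}).get("allowed", {})
    return [(f["key"], f["value"]) for f in filters
            if f["value"] not in allowed.get(f["key"], [])]

bad = validate_filters(details, [
    {"key": "topic", "value": "artificial_intelligence"},  # allowed
    {"key": "region", "value": "antarctica"},              # not in allowed list
])
print(bad)  # → [('region', 'antarctica')]
```

Catching an invalid value locally is cheaper than diagnosing an empty result set after the query runs.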
synorb-stream-stories
Retrieve stream content (stories) with filtering, date ranges, and pagination. This is your primary data retrieval tool - use it to fetch intelligence from any single stream with optional key-value filtering.

REQUIRED PARAMETERS:
- stream_id: Target stream identifier (single stream only)
- Date range (one pair required):
  * published_date_from + published_date_to (YYYY-MM-DD format) OR
  * created_on_from + created_on_to (YYYY-MM-DD format)

OPTIONAL PARAMETERS:
- filters: Array of simple key-value filter objects
  * Format: [{"key": "dimension", "value": "filter_value"}]
  * Use synorb-stream-details to discover valid keys and allowed values
  * All filters must match (implicit AND behavior)
- page_size: Results per page (max 200, no default - recommend 50-100 to start)
- page_num: Page index for pagination, 0-based (default 0)
- body_sections: Array of specific content sections to return (optional)
  * Format: ["headline", "summary", "analysis"]
  * To discover what sections are available:
    1. Call synorb-stream-details for your stream_id
    2. Look for the 'body_sections' field in the response
    3. Use those exact section names in your query
  * If omitted, returns the complete story.body with all available sections
  * Use this to optimize response size and speed when you only need specific parts
  * Invalid section names are silently ignored (no error, just not returned)
  * Common sections by stream type:
    - Discovery streams: "headline", "summary", "entities", "quotes", "key_points", "links"
    - Research streams: "executive_summary", "methodology", "findings", "analysis", "conclusions"
    - Narrative streams: "narrative", "body", "context", "interpretation"
  * Example: body_sections: ["headline", "summary"] returns only those two sections

RETURNS:
- stories[]: Array of content objects
  * story_id: Unique identifier for this story
  * published_date: When the story was published (YYYY-MM-DD)
  * created_on: When the story was created in the Synorb system (ISO timestamp)
  * filters: Filter dimensions that matched this story
  * evidence_ref.source_urls: Array of original source URLs (ALWAYS cite these in your outputs)
  * story.body: Structured content (format varies by stream - can be JSON, markdown, or structured text)
  * story.body_sections: Array showing which sections are included in this story's body
- pagination: Navigation metadata
  * total_count: Total stories matching your query across all pages
  * page_num: Current page number (0-indexed)
  * page_size: Number of stories per page
  * next: Next page number (null if you're on the last page)
  * prev: Previous page number (null if you're on the first page)

WHEN TO USE:
- Retrieving stories from a single stream
- User requests specific information from a stream
- Building reports, analysis, or derivative content
- Monitoring recent activity in a stream
- Filtering content by specific dimensions (topic, region, company, etc.)
- Simple exploration of stream content

CRITICAL REQUIREMENTS:
- MUST provide date ranges - there is no default behavior
- Date ranges are required even if searching by other filters
- MUST provide both 'from' AND 'to' dates (no open-ended ranges)
- Date format must be YYYY-MM-DD
- ALWAYS cite source_urls from evidence_ref in your outputs

PRACTICAL EXAMPLES:

Example 1 - Basic query with full content:
{ "stream_id": "42", "published_date_from": "2025-01-01", "published_date_to": "2025-01-31", "page_size": 50 }
→ Returns all stories from stream 42 in January with complete body content

Example 2 - Query with simple filters:
{ "stream_id": "15", "published_date_from": "2025-01-15", "published_date_to": "2025-01-27", "filters": [{"key": "topic", "value": "artificial_intelligence"}, {"key": "region", "value": "north_america"}] }
→ Returns stories that match BOTH topic=AI AND region=North America (implicit AND)

Example 3 - Optimized query requesting only specific sections:
{ "stream_id": "8", "published_date_from": "2025-01-20", "published_date_to": "2025-01-27", "body_sections": ["headline", "summary"], "page_size": 100 }
→ Returns stories but includes only the headline and summary sections (smaller, faster response)

Example 4 - Pagination through a large result set:
{ "stream_id": "23", "published_date_from": "2025-01-01", "published_date_to": "2025-01-31", "page_size": 50, "page_num": 2 }
→ Returns page 3 (0-indexed) of results with 50 stories per page

BEST PRACTICES:
- Start with recent dates (last 30 days) unless the user specifies otherwise
- Use page_size=50-100 initially for exploration
- Always call synorb-stream-details first to:
  * Get valid filter keys and allowed values
  * Discover available body_sections for the stream
  * Understand the stream's content structure
- Use body_sections to optimize performance:
  * Request only ["headline", "summary"] for preview/dashboard views
  * Request the full body only when detailed content is needed
  * Reduces bandwidth and speeds up response time
- For empty or unexpected results:
  1. Expand the date range
  2. Remove or relax filters one by one
  3. Try the alternate date field (published_date vs created_on)
  4. Check that the stream was active during the date range using synorb-stream-catalog metrics
  5. Verify filter keys/values are correct (check for typos)
- Parse story.body carefully - the format varies by stream (JSON, markdown, structured text)
- ALWAYS cite source_urls in your outputs - this is the original source attribution
- Cache responses when working with historical/static date ranges

STREAM TYPE BEHAVIOR:
- Discovery streams: Often rich in quotes, entities, links; sections like "headline", "summary", "entities", "quotes"
- Narrative streams: Focus on body-text narratives derived from data; typically "narrative" or "body" sections
- Research streams: Structured analysis sections with methodology; sections like "executive_summary", "methodology", "findings", "analysis"

RECOMMENDED QUERY WORKFLOW:
1. synorb-stream-catalog → Find relevant streams (get stream_id)
2. synorb-stream-details → Understand available filters and body_sections for that stream
3. synorb-stream-stories → Execute a filtered query with appropriate body_sections
4. Iterate based on results (adjust filters, date ranges, or sections as needed)

WHEN TO PAGINATE:
- If total_count > page_size, more results are available
- Use pagination.next to get the next page: set page_num to that value
- Continue until pagination.next is null (indicates the last page)
- Example: If total_count=500 and page_size=100, you'll need 5 requests (page_num: 0, 1, 2, 3, 4)

BODY_SECTIONS DISCOVERY WORKFLOW:
1. Call synorb-stream-details with your target stream_id
2. Examine the response for the 'body_sections' field - this lists the available sections
3. Use those exact section names in your synorb-stream-stories query
4. If the body_sections field is empty or not present, request the full body (omit the parameter)

WHY USE BODY_SECTIONS:
- Faster API responses (less data to transfer)
- Reduced token usage when working with LLMs
- Better for building lightweight interfaces (dashboards, previews, lists)
- When you only need summaries/headlines, not full content
- Iterative exploration: get headlines first, then fetch full stories for interesting ones
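The pagination rules above (follow `pagination.next` until it is null) can be sketched as a loop. `fetch_stories` here is a hypothetical stand-in for the MCP tool call that simulates 120 stories served 50 per page; only the `fetch_all` loop reflects the documented behavior.

```python
# Sketch: paginating synorb-stream-stories results. 'fetch_stories' is a
# hypothetical simulation of the tool call, returning the pagination
# metadata documented above for a fixed result set of 120 stories.
TOTAL = 120

def fetch_stories(stream_id, date_from, date_to, page_size=50, page_num=0):
    start = page_num * page_size
    count = max(0, min(page_size, TOTAL - start))
    last_page = (TOTAL - 1) // page_size
    return {
        "stories": [{"story_id": f"s{start + i}"} for i in range(count)],
        "pagination": {
            "total_count": TOTAL,
            "page_num": page_num,
            "page_size": page_size,
            "next": page_num + 1 if page_num < last_page else None,
            "prev": page_num - 1 if page_num > 0 else None,
        },
    }

def fetch_all(stream_id, date_from, date_to, page_size=50):
    """Follow pagination.next until it is null, collecting every story."""
    stories, page = [], 0
    while page is not None:
        resp = fetch_stories(stream_id, date_from, date_to, page_size, page)
        stories.extend(resp["stories"])
        page = resp["pagination"]["next"]
    return stories

all_stories = fetch_all("42", "2025-01-01", "2025-01-31")
print(len(all_stories))  # → 120
```

Driving the loop off `pagination.next` rather than computing page counts from `total_count` keeps it correct even if the result set changes between requests.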
synorb-stream-stories-advanced
Search for stream stories with flexible filtering, date ranges, and pagination. This endpoint performs advanced content retrieval across one or multiple streams and supports logical filter grouping (AND/OR) for precise queries.

WHEN TO USE THIS TOOL:
- Retrieving stories from one or more streams
- Need AND/OR filter logic: "Find stories about (Tesla OR SpaceX) AND (funding)"
- Combining multiple conditions: "Show AI stories in Europe OR crypto stories in Asia"
- Complex filter requirements with logical grouping
- Building reports, dashboards, or analysis requiring filtered content
- Simple queries work too - filters are optional

REQUIRED PARAMETERS:
- stream_ids: Array of stream identifiers (numbers only)
  * Single stream: [123]
  * Multiple streams: [123, 456, 789]
  * Must contain at least one numeric ID
- Date range (one pair required):
  * published_date_from + published_date_to (YYYY-MM-DD format) OR
  * created_on_from + created_on_to (YYYY-MM-DD format)

OPTIONAL PARAMETERS:
- filters: Array of filter objects with logical operators
  * Format: [{"key": "dimension", "operator": "AND|OR", "value": "filter_value"}]
  * Use synorb-stream-details first to get valid keys and values
  * All filters must include the "operator" field
- page_size: Results per page (default 50, max 200)
- page_num: Page index for pagination, 0-based (default 0)
- body_sections: Array of specific content sections to return (optional)
  * Format: ["headline", "summary", "analysis"]
  * Use synorb-stream-details to discover available sections for each stream
  * Look for the 'body_sections' field in the stream details response
  * If omitted, returns the complete story.body with all available sections
  * Use this to reduce payload size when you only need specific parts
  * Invalid section names are silently ignored (no error thrown)
  * Common sections include: "headline", "summary", "analysis", "key_points", "entities", "quotes"
  * Section availability varies by stream type and content structure

UNDERSTANDING FILTER OPERATORS:
operator="AND" → ALL conditions must match (intersection)
  Example: [{"key": "topic", "operator": "AND", "value": "ai"}, {"key": "region", "operator": "AND", "value": "europe"}]
  Result: Stories that are BOTH about AI AND in Europe
operator="OR" → ANY condition can match (union)
  Example: [{"key": "company", "operator": "OR", "value": "apple"}, {"key": "company", "operator": "OR", "value": "microsoft"}]
  Result: Stories mentioning Apple OR Microsoft (or both)

MIXING OPERATORS:
- All filters with operator="AND" must ALL match together
- All filters with operator="OR" means ANY can match
- For complex nested logic like "(A AND B) OR C", run separate queries and combine the results

PRACTICAL EXAMPLES:

Example 1 - Single stream with AND logic:
{"stream_ids": [42], "published_date_from": "2025-01-01", "published_date_to": "2025-01-31", "filters": [{"key": "topic", "operator": "AND", "value": "fintech"}, {"key": "stage", "operator": "AND", "value": "series_b"}]}
→ Returns fintech stories that are ALSO series B stage

Example 2 - Single stream with OR logic:
{"stream_ids": [42], "published_date_from": "2025-01-01", "published_date_to": "2025-01-31", "filters": [{"key": "investor", "operator": "OR", "value": "sequoia"}, {"key": "investor", "operator": "OR", "value": "a16z"}]}
→ Returns stories mentioning Sequoia OR Andreessen Horowitz

Example 3 - Multiple streams with filters:
{"stream_ids": [10, 15, 23], "published_date_from": "2025-01-15", "published_date_to": "2025-01-31", "filters": [{"key": "region", "operator": "AND", "value": "asia"}], "page_size": 100}
→ Returns stories from three streams, all filtered to the Asia region

Example 4 - Using body_sections to reduce payload:
{"stream_ids": [5], "published_date_from": "2025-01-20", "published_date_to": "2025-01-27", "body_sections": ["headline", "summary"]}
→ Returns only the headline and summary, omitting full analysis/content (faster, smaller response)

Example 5 - No filters (just multi-stream):
{"stream_ids": [5, 8], "published_date_from": "2025-01-20", "published_date_to": "2025-01-27"}
→ Returns all stories from both streams in the date range (filters are optional)

RETURNS:
{"stories": [{"story_id": "uuid-string", "stream_id": 123, "published_date": "2025-01-15", "created_on": "2025-01-15T10:30:00Z", "filters": {...}, "body": {...}, "body_sections": ["headline", "summary", "analysis"]}], "pagination": {"total_count": 150, "page_num": 0, "page_size": 50, "next": 1, "prev": null}}

CRITICAL REQUIREMENTS:
✅ stream_ids must be an array of numbers: [123] or [123, 456]
✅ Date range is mandatory - must provide both from AND to dates
✅ Every filter must include "operator": "AND" or "OR"
✅ Always call synorb-stream-details first to validate filter keys and values
✅ Date format must be YYYY-MM-DD

COMMON MISTAKES TO AVOID:
❌ Using string IDs: stream_ids: ["123"] → ✅ Use numbers: stream_ids: [123]
❌ Forgetting the operator in filters → ✅ Always include "operator": "AND" or "OR" in every filter
❌ Not validating filters beforehand → ✅ Call synorb-stream-details to get valid filter keys and allowed values
❌ Omitting date ranges → ✅ Always provide both from and to dates
❌ Requesting body_sections without checking what's available → ✅ Call synorb-stream-details first to see the available sections for each stream

BEST PRACTICES:
- Start with page_size=50-100 for exploration; increase for batch exports
- Always define explicit date ranges (reduces payload and improves performance)
- Call synorb-stream-details before querying to get:
  * Valid filter dimensions and allowed values
  * Available body_sections for the stream
  * Stream structure and content format
- Use the body_sections parameter to:
  * Reduce response size when you only need headlines/summaries
  * Speed up queries when full content isn't needed
  * Build lightweight dashboards or preview interfaces
- For empty or unexpected results:
  1. Expand the date range (check that the stream was active in that period)
  2. Try switching the operator from AND to OR (broader matching)
  3. Verify filter keys/values are correct (check for typos)
  4. Remove filters one by one to identify the problematic condition
- Stories are complete in the response - no additional API calls needed
- Use pagination metadata (next/prev) for navigating large result sets
- Cache responses when working with historical/static date ranges

STREAM TYPE BEHAVIOR:
- Discovery streams: Frequent updates, rich metadata; common sections include "headline", "summary", "entities", "quotes", "links"
- Research streams: Long-form content; sections like "executive_summary", "methodology", "analysis", "conclusions", "recommendations"
- Narrative streams: Continuous narrative text, typically "body" or "narrative" sections, minimal structural breakdown

RECOMMENDED WORKFLOW:
1. synorb-stream-catalog → Identify relevant streams (get stream IDs)
2. synorb-stream-details → Check available filters, allowed values, and body_sections for those streams
3. synorb-stream-stories-advanced → Execute the search with logical filter combinations
4. Parse results directly (stories contain complete content)
5. Iterate: Adjust filters, date ranges, operators, or body_sections based on results

WHEN TO PAGINATE:
- total_count > page_size indicates more results are available
- Use pagination.next to get the next page: set page_num to that value
- Continue until pagination.next is null (no more pages)
- Example: If total_count=500 and page_size=100, you'll need 5 requests (page_num: 0, 1, 2, 3, 4)
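For nested logic like "(A AND B) OR C", which a single filter array cannot express, the recommendation above is to run one query per branch and combine the results. The sketch below shows one way to merge branches while deduplicating on `story_id`; `run_query` is a hypothetical stand-in returning canned results, not a real Synorb call.

```python
# Sketch: approximating "(A AND B) OR C" with two separate queries whose
# results are merged and deduplicated on story_id. 'run_query' simulates
# the synorb-stream-stories-advanced call with canned branch results.
def run_query(branch):
    canned = {
        "branch_a_and_b": [{"story_id": "s1"}, {"story_id": "s2"}],
        "branch_c": [{"story_id": "s2"}, {"story_id": "s3"}],  # s2 overlaps
    }
    return canned[branch]

def merge_branches(*branches):
    """Union story lists, keeping the first occurrence of each story_id."""
    seen, merged = set(), []
    for stories in branches:
        for story in stories:
            if story["story_id"] not in seen:
                seen.add(story["story_id"])
                merged.append(story)
    return merged

combined = merge_branches(run_query("branch_a_and_b"), run_query("branch_c"))
print([s["story_id"] for s in combined])  # → ['s1', 's2', 's3']
```

Deduplicating on `story_id` matters because a story matching both branches would otherwise appear twice in the combined report.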
synorb-profile
Retrieve authenticated account metadata and access configuration.

NO PARAMETERS REQUIRED

RETURNS:
- Account identifiers
- API token scope and permissions
- Usage limits and quotas
- Configuration settings

WHEN TO USE:
- User asks about account or access level
- Debugging authentication
- Verifying token validity

BEST PRACTICES:
- Rarely needed for standard intelligence queries
- Call only when specifically required
- Not part of the typical stream query workflow
Get Started
Click any tool below to instantly start building AI tools that enhance your workflow and productivity
Smart Scheduling Assistant
Build an AI scheduler that finds optimal meeting times and handles booking requests automatically.
Event Management System
Create tools that manage events, send reminders, and coordinate attendees with AI assistance.
Meeting Prep Generator
Automatically generate meeting agendas, talking points, and preparation materials before each session.
Availability Optimizer
Optimize your calendar by analyzing patterns and suggesting better time blocks for productivity.
Recurring Event Manager
Automate recurring events, reminders, and follow-ups with intelligent scheduling logic.
Calendar Integration Hub
Connect multiple calendars and sync events across platforms with AI-powered conflict resolution.
Related Actions
Excel
excel
Microsoft Excel is a powerful spreadsheet application for data analysis, calculations, and visualization, enabling users to organize and process data with formulas, charts, and pivot tables
11 uses
Youtube
youtube
YouTube is a video-sharing platform with user-generated content, live streaming, and monetization opportunities, widely used for marketing, education, and entertainment
366 uses
Instagram
instagram
Instagram is a social media platform for sharing photos, videos, and stories. Only supports Instagram Business and Creator accounts, not Instagram Personal accounts.
1.66k uses
Linkup
linkup
Search the web in real time to get trustworthy, source-backed answers. Find the latest news and comprehensive results from the most relevant sources. Use natural language queries to quickly gather facts, citations, and context.
4.93k uses
Airtable
airtable
Airtable merges spreadsheet functionality with database power, enabling teams to organize projects, track tasks, and collaborate through customizable views, automation, and integrations for data management
1.27k uses
GitHub
github
GitHub is a code hosting platform for version control and collaboration, offering Git-based repository management, issue tracking, and continuous integration features
115 uses
Explore Pickaxe Templates
Get started faster with pre-built templates. Choose from our library of ready-to-use AI tools and customize them for your needs.
Ready to Connect stream?
Build your AI tool with this MCP server in the Pickaxe builder.
Build with Pickaxe