mcp-runware AI Integration
Pair Pickaxe with mcp-runware to automate cross-tool workflows and keep work moving after every model response, without constant copy-paste between tools.
Capabilities
11 capabilities
imageInference
Generate an image using Runware's image inference API with the full range of IImageInference parameters, sent directly via HTTP requests. It supports basic settings, advanced features, and specialized configurations. If the user provides an image and asks to generate an image based on it, use model "bytedance:4@1" and pass the reference image through the seedImage parameter. Display the URL of the generated image inside the chat.

IMPORTANT: Image inputs (seedImage, referenceImages, maskImage) accept only:
1. Publicly available URLs (e.g., "https://example.com/image.jpg")
2. File paths, processed by the imageUpload tool first
3. Runware UUIDs from previously uploaded images
Workflow: if the user provides a local file path, first use imageUpload to get a Runware UUID, then use that UUID here.

Args:
- positivePrompt (str): Text instruction that guides the model. To generate an image without any prompt guidance, use the special token __BLANK__.
- model (str): Model identifier (default: "civitai:943001@1055701").
- height (int): Image height (128-2048, divisible by 64, default: 1024).
- width (int): Image width (128-2048, divisible by 64, default: 1024).
- numberResults (int): Number of images to generate (1-20, default: 1). If the user says "generate 4 images ...", numberResults should be 4; "create 2 images ...", 2; and so on.
- steps (int, optional): Number of iterations the model performs to generate the image (1-100, default: 20). More steps produce a more detailed image.
- CFGScale (float, optional): How closely the images follow the prompt versus how much freedom the model has (0-50, default: 7). Higher values stay closer to the prompt; low values may reduce the quality of the results.
- negativePrompt (str, optional): Negative guidance text used to avoid undesired results.
- seed (int, optional): Random seed for reproducible results.
- scheduler (str, optional): Inference scheduler. The list of available schedulers is at https://runware.ai/docs/en/image-inference/schedulers
- outputType (str, optional): Output type ('URL', 'dataURI', 'base64Data', default: 'URL').
- outputFormat (str, optional): Output image format ('JPG', 'PNG', 'WEBP', default: 'JPG').
- checkNSFW (bool, optional): Enable NSFW content checking. When enabled, the API checks whether the image contains NSFW (not safe for work) content using a pre-trained adult-content detector (default: false).
- strength (float, optional): For image-to-image or inpainting, controls the influence of the seedImage on the output (0-1, default: 0.8). Lower values keep more of the original image; higher values allow more creative deviation.
- clipSkip (int, optional): Additional layer skips during CLIP prompt processing, on top of any skips the model already applies by default (0-2).
- promptWeighting (str, optional): Prompt weighting method ('compel', 'sdEmbeds').
- includeCost (bool, optional): Include cost in the response (default: false).
- vae (str, optional): VAE (Variational Autoencoder) model identifier.
- maskMargin (int, optional): Extra context pixels around the masked region during inpainting (32-128).
- outputQuality (int, optional): Compression quality of the output image (20-99, default: 95). Higher values preserve more quality at a larger file size.
- taskUUID (UUID, optional): Unique task identifier.
- uploadEndpoint (str, optional): URL where generated content is automatically uploaded via HTTP PUT (cloud storage, webhook services, CDN integration). The content is sent as the request body so your endpoint can process the generated image or video immediately upon completion.
- seedImage (str, optional): Required for image-to-image, inpainting, or outpainting; the seed image for the diffusion process. Accepts only public URLs, Runware UUIDs, or file paths (use imageUpload first to get a UUID). Supported formats: PNG, JPG, WEBP.
- referenceImages (List[str], optional): Reference images that condition generation toward their style, composition, or characteristics. Accepts only public URLs, Runware UUIDs, or file paths (use imageUpload first to get a UUID).
- maskImage (str, optional): Required for inpainting; the mask image. Accepts only public URLs, Runware UUIDs, or file paths (use imageUpload first to get a UUID). Supported formats: PNG, JPG, WEBP.
- acceleratorOptions (Dict[str, Any], optional): Advanced caching mechanisms that significantly speed up generation by reducing redundant computation:
  - teaCache, for transformer-based models (e.g., Flux, SD 3): {"teaCache": true - enables TeaCache to accelerate iterative editing (default: false), "teaCacheDistance": 0.5 - reuse aggressiveness (0-1, default: 0.5); lower = better quality, higher = better speed}
  - deepCache, for UNet-based models (e.g., SDXL, SD 1.5): {"deepCache": true - caches internal feature maps for faster generation (default: false), "deepCacheInterval": 3 - step interval between caching operations (min: 1, default: 3); higher = faster, lower = better quality, "deepCacheBranchId": 0 - network branch index for caching depth (min: 0, default: 0); lower = faster, higher = more quality-preserving}
- advancedFeatures (Dict[str, Any], optional): Advanced generation features, available only for the FLUX model architecture, e.g. "advancedFeatures": {"layerDiffuse": true}.
- controlNet (List[Dict[str, Any]], optional): ControlNet provides a guide image to help the model generate images that align with the desired structure: "controlNet": [{"model": ControlNet model ID (standard or AIR), "guideImage": guide image (public URL, Runware UUID, or file path - use imageUpload first to get a UUID), "weight": 1.0 - guidance strength (0-1, default: 1), "startStep": 1 - step to start guidance, "endStep": 20 - step to end guidance, "startStepPercentage": 0 - alternative to startStep (0-99), "endStepPercentage": 100 - alternative to endStep (start+1 to 100), "controlMode": "balanced" - guide vs. prompt priority ("prompt", "controlnet", "balanced")}]
- lora (List[Dict[str, Any]], optional): LoRA (Low-Rank Adaptation) adapts a model to specific styles or features by emphasizing particular aspects of the data: "lora": [{"model": AIR identifier of the LoRA model (e.g., "civitai:132942@146296"), "weight": 1.0 - strength of the LoRA's influence (-4 to 4, default: 1); positive to apply the style, negative to suppress it}]
- lycoris (List[Dict[str, Any]], optional): LyCORIS model configurations: "lycoris": [{"model": model, "weight": weight}]
- embeddings (List[Dict[str, Any]], optional): Textual inversion embeddings.
- ipAdapters (List[Dict[str, Any]], optional): IP-Adapters enable image-prompted generation, using reference images to guide the style and content of your generations; multiple IP-Adapters can be used simultaneously: "ipAdapters": [{"model": AIR identifier of the IP-Adapter model (e.g., "runware:55@2"), "guideImage": reference image (public URL, Runware UUID, or file path - use imageUpload first to get a UUID) in PNG/JPG/WEBP format, "weight": 1.0 - influence strength (0-1, default: 1); 0 disables, 1 applies full guidance}]
- refiner (Dict[str, Any], optional): Refiner models help create higher-quality outputs by enhancing image details and overall coherence: "refiner": {"model": AIR identifier of the SDXL-based refiner model (e.g., "civitai:101055@128080"), "startStep": 30 - step at which the refiner begins processing (min: 2, max: total steps), or use "startStepPercentage" instead (1-99) for percentage-based control}
- outpaint (Dict[str, Any], optional): Extends the image boundaries in the specified directions. When outpainting, width and height must give the final dimensions: the seedImage size plus the total extension (top + bottom, left + right). "outpaint": {"top": 256, "right": 128, "bottom": 256, "left": 128 - pixels to extend on each side (min: 0, multiple of 64), "blur": 16 - blur radius (0-32, default: 0) that smooths the transition between original and extended areas}
- instantID (Dict[str, Any], optional): InstantID configuration for identity-preserving generation: "instantID": {"inputImage": reference image for identity preservation, "poseImage": pose reference image for pose guidance} - both accept public URLs, Runware UUIDs, or file paths (use imageUpload first to get a UUID) in PNG/JPG/WEBP format.
- acePlusPlus (Dict[str, Any], optional): ACE++ for character-consistent generation: "acePlusPlus": {"type": "portrait" - task type ("portrait", "subject", "local_editing") for style- or region-specific editing, "inputImages": [...] - reference images for identity/style preservation (public URLs, Runware UUIDs, or file paths - use imageUpload first to get a UUID), "inputMasks": [...] - mask images for targeted edits (white = edit, black = preserve), used only in local_editing, "repaintingScale": 0.5 - balance between identity (0) and prompt adherence (1), default: 0}
- extraArgs (Dict[str, Any], optional): Extra arguments for the request.

Returns:
dict: The generation result with status, message, result data, parameters, and URL.

Example:
>>> result = await imageInference(
...     positivePrompt="A beautiful sunset over mountains",
...     width=1024,
...     height=1024
... )
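The parameter constraints above (dimensions in 128-2048 and divisible by 64, numberResults in 1-20) can be checked client-side before a request is sent. The sketch below assembles one imageInference task object from the documented parameter names; the helper name, the `taskType` label, and the overall wire format are illustrative assumptions, not the official Runware schema.

```python
import uuid

def build_image_inference_task(positive_prompt: str,
                               model: str = "civitai:943001@1055701",
                               width: int = 1024,
                               height: int = 1024,
                               number_results: int = 1,
                               **optional):
    """Assemble a hypothetical imageInference task dict, validating only the
    constraints stated in the docs (dimension range/divisibility, result count)."""
    for name, value in (("width", width), ("height", height)):
        if not (128 <= value <= 2048) or value % 64 != 0:
            raise ValueError(f"{name} must be 128-2048 and divisible by 64")
    if not (1 <= number_results <= 20):
        raise ValueError("numberResults must be between 1 and 20")

    task = {
        "taskType": "imageInference",   # assumed task-type label
        "taskUUID": str(uuid.uuid4()),
        "positivePrompt": positive_prompt,
        "model": model,
        "width": width,
        "height": height,
        "numberResults": number_results,
    }
    task.update(optional)               # steps, CFGScale, seedImage, lora, ...
    return task

task = build_image_inference_task("A beautiful sunset over mountains",
                                  steps=20, CFGScale=7)
```

Validating locally surfaces bad dimensions (e.g., a width of 1000, which is not divisible by 64) before any API cost is incurred.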
photoMaker
Transform and style images using PhotoMaker's advanced personalization technology, creating consistent, high-quality image variations with precise subject fidelity and style control. This function enables instant subject personalization without additional training: by providing up to four reference images, you can generate new images that maintain subject fidelity while applying various styles and compositions.

IMPORTANT: inputImages accepts only:
1. Publicly available URLs (e.g., "https://example.com/image.jpg")
2. File paths, processed by the imageUpload tool first
3. Runware UUIDs from previously uploaded images
Workflow: if the user provides a local file path, first use imageUpload to get a Runware UUID, then use that UUID here.

Args:
- positivePrompt (str): Text instruction to guide the model (2-300 chars). The trigger word 'rwre' is automatically prepended if not already included in the prompt.
- inputImages (List[str]): 1-4 reference images of the subject. Accepts only public URLs, Runware UUIDs, or file paths (use imageUpload first to get a UUID). Images must contain clear faces for best results.
- model (str): SDXL-based model identifier (default: "civitai:139562@344487" - RealVisXL V4.0).
- height (int): Image height (128-2048, divisible by 64, default: 1024).
- width (int): Image width (128-2048, divisible by 64, default: 1024).
- style (str): Artistic style to apply ("No Style", "Cinematic", "Disney Character", "Digital Art", "Photographic", "Fantasy art", "Neonpunk", "Enhance", "Comic book", "Lowpoly", "Line art").
- strength (int): Balance between subject fidelity and transformation (15-50, default: 15). Lower values give stronger subject fidelity.
- numberResults (int): Number of images to generate (1-20, default: 1).
- steps (int): Number of inference iterations (1-100, default: 20).
- CFGScale (float): How closely images match the prompt (0-50, default: 7).
- negativePrompt (str, optional): Text to guide what to avoid in generation.
- scheduler (str, optional): Inference scheduler name.
- outputType (str, optional): Output format ('URL', 'dataURI', 'base64Data', default: 'URL').
- outputFormat (str, optional): Image format ('JPG', 'PNG', 'WEBP', default: 'JPG').
- outputQuality (int, optional): Output image quality (20-99, default: 95).
- uploadEndpoint (str, optional): URL for automatic upload of generated content.
- checkNSFW (bool, optional): Enable NSFW content check.
- includeCost (bool, optional): Include generation cost in the response.
- taskUUID (UUID, optional): Unique task identifier.
- clipSkip (int, optional): Additional CLIP model layer skips (0-2).
- seed (int, optional): Random seed for reproducible results.

Returns:
dict: The generation result with status, message, result data, parameters, and both image data for direct display and URLs.

Example:
>>> result = await photoMaker(
...     positivePrompt="A professional headshot",
...     inputImages=["path/to/reference.jpg"],
...     style="Photographic" 
... )
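The automatic 'rwre' trigger-word behavior described above can be mirrored client-side so prompts are predictable before submission. This is a minimal sketch of that rule plus the documented 2-300 character bound; the helper name is hypothetical and the word-boundary check is an assumption about how "included in the prompt" is interpreted.

```python
def prepare_photomaker_prompt(prompt: str, trigger: str = "rwre") -> str:
    """Prepend the PhotoMaker trigger word if absent, then enforce the
    documented 2-300 character limit on the final prompt."""
    if trigger not in prompt.split():
        prompt = f"{trigger} {prompt}"
    if not (2 <= len(prompt) <= 300):
        raise ValueError("positivePrompt must be 2-300 characters")
    return prompt
```

A prompt that already contains the trigger word passes through unchanged, so calling the helper twice is safe.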
imageUpscale
Enhance the resolution and quality of images using Runware's advanced upscaling API, transforming low-resolution images into sharp, high-definition visuals. This function supports various input formats and flexible output options. The maximum output size is 4096x4096 pixels; larger inputs are automatically resized to stay within this limit.

IMPORTANT: inputImage accepts only:
1. Publicly available URLs (e.g., "https://example.com/image.jpg")
2. File paths, processed by the imageUpload tool first
3. Runware UUIDs from previously uploaded images
Workflow: if the user provides a local file path, first use imageUpload to get a Runware UUID, then use that UUID here.

Args:
- inputImage (str): Image to upscale. Accepts only public URLs, Runware UUIDs, or file paths (use imageUpload first to get a UUID). Supported formats: PNG, JPG, WEBP.
- upscaleFactor (int): Level of upscaling (2-4, default: 2). Each level multiplies the image size by that factor; for example, factor 2 doubles the image size.
- outputType (str, optional): Output format ('URL', 'dataURI', 'base64Data', default: 'URL').
- outputFormat (str, optional): Image format ('JPG', 'PNG', 'WEBP', default: 'JPG'). Note: PNG is required for transparency.
- outputQuality (int, optional): Output image quality (20-99, default: 95).
- includeCost (bool, optional): Include generation cost in the response.
- taskUUID (UUID, optional): Unique task identifier.

Returns:
dict: The upscaling result with status, message, result data, parameters, and both image data for direct display and URLs.

Note: Maximum output size is 4096x4096. If input size * upscaleFactor would exceed this, the input is automatically resized first; for example, a 2048x2048 input with factor 4 is reduced to 1024x1024 before upscaling.
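The 4096-pixel cap described in the note above is simple arithmetic that can be computed ahead of time to predict the actual output size. This sketch assumes the resize uniformly shrinks the input so that the larger side times the factor fits the limit; the exact server-side rounding behavior is an assumption.

```python
def plan_upscale(width: int, height: int, factor: int, limit: int = 4096):
    """Predict the (input, output) sizes after the automatic pre-resize:
    if input * factor would exceed the limit, the input is shrunk first."""
    if factor not in (2, 3, 4):
        raise ValueError("upscaleFactor must be 2-4")
    scale = min(1.0, limit / (max(width, height) * factor))
    in_w, in_h = int(width * scale), int(height * scale)
    return (in_w, in_h), (in_w * factor, in_h * factor)
```

This reproduces the documented example: a 2048x2048 input with factor 4 is reduced to 1024x1024, yielding a 4096x4096 output.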
imageBackgroundRemoval
Remove backgrounds from images effortlessly using Runware's low-cost image editing API, isolating subjects to create images with transparent backgrounds. This function enables high-quality background removal with support for various input formats and advanced settings such as alpha matting for enhanced edge quality.

IMPORTANT: inputImage accepts only:
1. Publicly available URLs (e.g., "https://example.com/image.jpg")
2. File paths, processed by the imageUpload tool first
3. Runware UUIDs from previously uploaded images
Workflow: if the user provides a local file path, first use imageUpload to get a Runware UUID, then use that UUID here.

Args:
- inputImage (str): Image to process. Accepts only public URLs, Runware UUIDs, or file paths (use imageUpload first to get a UUID). Supported formats: PNG, JPG, WEBP.
- model (str): Background removal model to use (default: "runware:109@1" - RemBG 1.4). Available models:
  - runware:109@1: RemBG 1.4
  - runware:110@1: Bria RMBG 2.0
  - runware:112@1: BiRefNet v1 Base
  - runware:112@2: BiRefNet v1 Base - COD
  - runware:112@3: BiRefNet Dis
  - runware:112@5: BiRefNet General
  - runware:112@6: BiRefNet General Resolution 512x512 FP16
  - runware:112@7: BiRefNet HRSOD DHU
  - runware:112@8: BiRefNet Massive TR DIS5K TR TES
  - runware:112@9: BiRefNet Matting
  - runware:112@10: BiRefNet Portrait
- outputType (str, optional): Output format ('URL', 'dataURI', 'base64Data', default: 'URL').
- outputFormat (str, optional): Image format ('JPG', 'PNG', 'WEBP', default: 'PNG').
- outputQuality (int, optional): Output image quality (20-99, default: 95).
- includeCost (bool, optional): Include generation cost in the response.
- taskUUID (UUID, optional): Unique task identifier.
- settings (Dict[str, Any], optional): Advanced settings (RemBG 1.4 model only):
  - rgba: [r, g, b, a] background color and transparency (default: [255, 255, 255, 0])
  - postProcessMask (bool): Enable mask post-processing (default: False)
  - returnOnlyMask (bool): Return only the mask instead of the processed image (default: False)
  - alphaMatting (bool): Enable alpha matting for better edges (default: False)
  - alphaMattingForegroundThreshold (int): Foreground threshold, 1-255 (default: 240)
  - alphaMattingBackgroundThreshold (int): Background threshold, 1-255 (default: 10)
  - alphaMattingErodeSize (int): Edge smoothing size, 1-255 (default: 10)

Returns:
dict: The background removal result with status, message, result data, parameters, and both image data for direct display and URLs.
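Since the settings dict above applies only to RemBG 1.4 and has fixed defaults, a small client-side helper can merge user overrides onto those defaults and reject unknown keys before a request is sent. The defaults below are taken from the documented values; the helper itself is an illustrative sketch, not part of the Runware API.

```python
# Documented defaults for the RemBG 1.4 settings object.
REMBG_DEFAULTS = {
    "rgba": [255, 255, 255, 0],
    "postProcessMask": False,
    "returnOnlyMask": False,
    "alphaMatting": False,
    "alphaMattingForegroundThreshold": 240,
    "alphaMattingBackgroundThreshold": 10,
    "alphaMattingErodeSize": 10,
}

def rembg_settings(**overrides):
    """Merge user overrides onto the documented RemBG 1.4 defaults,
    rejecting any setting name the docs do not list."""
    unknown = set(overrides) - set(REMBG_DEFAULTS)
    if unknown:
        raise ValueError(f"unknown settings: {sorted(unknown)}")
    return {**REMBG_DEFAULTS, **overrides}
```

For example, `rembg_settings(alphaMatting=True)` enables alpha matting while keeping the documented thresholds.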
imageCaption
Generate image descriptions using Runware's API. Analyzes images to produce accurate, concise captions that can be used to create additional images or provide detailed insights into visual content. This function enables AI-powered image analysis and is useful for understanding image content or generating prompts for further image creation.

IMPORTANT: inputImage accepts only:
1. Publicly available URLs (e.g., "https://example.com/image.jpg")
2. File paths, processed by the imageUpload tool first
3. Runware UUIDs from previously uploaded images
Workflow: if the user provides a local file path, first use imageUpload to get a Runware UUID, then use that UUID here.

Args:
- inputImage (str): Image to analyze. Accepts only public URLs, Runware UUIDs, or file paths (use imageUpload first to get a UUID). Supported formats: PNG, JPG, WEBP.
- includeCost (bool, optional): Include generation cost in the response.
- taskUUID (UUID, optional): Unique task identifier.

Returns:
dict: The caption generation result with status, message, result data (including the generated text), and cost if requested.
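Every capability above repeats the same rule for image inputs: public URL or Runware UUID goes straight to the API, while a local file path must pass through imageUpload first. A small classifier makes that routing decision explicit; the function name and the heuristics (standard UUID format, http/https scheme) are illustrative assumptions.

```python
import re
from urllib.parse import urlparse

UUID_RE = re.compile(
    r"^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$", re.I)

def classify_image_input(value: str) -> str:
    """Return which of the three accepted input kinds `value` looks like:
    'uuid' or 'url' can be sent as-is; 'path' must go through imageUpload."""
    if UUID_RE.match(value):
        return "uuid"
    if urlparse(value).scheme in ("http", "https"):
        return "url"
    return "path"
```

A caller would check the result and invoke imageUpload only for the "path" case before building the caption (or inference) request.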
Get Started
Click any tool below to instantly start building AI tools that enhance your workflow and productivity
Chatbot Builder
Create intelligent chatbots that handle customer inquiries, provide support, and answer questions 24/7.
Team Communication Hub
Build tools that streamline team communication, send notifications, and coordinate workflows.
Notification System
Automate notifications across channels to keep users informed about important updates and events.
Conversation Analyzer
Analyze chat logs and conversations to extract insights, sentiment, and key information.
Message Routing Assistant
Intelligently route messages to the right team members based on content and context.
Response Generator
Generate contextual responses to messages using AI that understands conversation history.
Related Actions
Excel
excel
Microsoft Excel is a powerful spreadsheet application for data analysis, calculations, and visualization, enabling users to organize and process data with formulas, charts, and pivot tables
11 uses
Youtube
youtube
YouTube is a video-sharing platform with user-generated content, live streaming, and monetization opportunities, widely used for marketing, education, and entertainment
366 uses
Instagram
instagram
Instagram is a social media platform for sharing photos, videos, and stories. Only supports Instagram Business and Creator accounts, not Instagram Personal accounts.
1.66k uses
Linkup
linkup
Search the web in real time to get trustworthy, source-backed answers. Find the latest news and comprehensive results from the most relevant sources. Use natural language queries to quickly gather facts, citations, and context.
4.93k uses
Airtable
airtable
Airtable merges spreadsheet functionality with database power, enabling teams to organize projects, track tasks, and collaborate through customizable views, automation, and integrations for data management
1.27k uses
GitHub
github
GitHub is a code hosting platform for version control and collaboration, offering Git-based repository management, issue tracking, and continuous integration features
115 uses
Explore Pickaxe Templates
Get started faster with pre-built templates. Choose from our library of ready-to-use AI tools and customize them for your needs.
Ready to Connect mcp-runware?
Build your AI tool with this MCP server in the Pickaxe builder.
Build with Pickaxe