USMAN’S INSIGHTS
AI ARCHITECT

© 2026 Muhammad Usman Akbar. All rights reserved.

Multi-App Workflows

Your TaskProcessingWorkflow is working well. It validates tasks, assigns them, and sends notifications. But now you're scaling the team. The notification team has their own service with a dedicated deployment pipeline. The ML team wants to run inference activities on GPU-equipped nodes. The security team requires sensitive operations to run in isolated environments with different access controls.

Suddenly, your single-app workflow needs to orchestrate activities across multiple services. You could duplicate code, but that creates maintenance nightmares. You could use pub/sub, but you'd lose the durable orchestration guarantees. What you need is workflow coordination that spans service boundaries while maintaining the same fault-tolerance you've built.


Why Multi-App Workflows?

Consider a task management system with specialized services:

```text
task-orchestrator-service
└── TaskProcessingWorkflow (orchestrates everything)
    ├── Call activity: validate_task (local)
    ├── Call activity: assign_task (local)
    ├── Call activity: send_email → notification-service
    │   └── Specialized for email, SMS, push notifications
    ├── Call activity: run_ml_inference → ml-inference-service
    │   └── GPU nodes, Python + PyTorch
    └── Call child workflow: audit_workflow → audit-service
        └── Compliance team's separate deployment
```

Multi-app workflows let you maintain orchestration logic in one place while distributing execution across your architecture.

| Approach | Durability | Orchestration Control | Service Independence |
| --- | --- | --- | --- |
| Direct HTTP calls | No | Workflow maintains control | High |
| Pub/sub events | At-least-once | Event-driven, loose | Very high |
| Multi-app workflows | Yes (replay) | Workflow maintains control | High |

Cross-App Activity Invocation

The key to multi-app workflows is the app_id parameter. When calling an activity, you specify which Dapr application should execute it.

The Notification Service

```python
# notification-service/main.py
from datetime import datetime, timezone

@wfr.activity(name="send_email")
def send_email(ctx, notification: dict) -> dict:
    """Activity that sends email notifications."""
    recipient = notification["recipient"]
    print(f"Sending email to {recipient}")
    # Activities may read the wall clock freely; only workflow code
    # must remain deterministic for replay.
    return {"sent": True, "sent_at": datetime.now(timezone.utc).isoformat()}

@wfr.activity(name="send_push")
def send_push(ctx, notification: dict) -> dict:
    """Activity that sends push notifications."""
    user_id = notification["user_id"]
    print(f"Sending push to {user_id}")
    return {"sent": True, "sent_at": datetime.now(timezone.utc).isoformat()}
```
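The decorators above assume a module-level `WorkflowRuntime` instance named `wfr`. A minimal sketch of the service's startup, assuming the Dapr Python SDK (`dapr-ext-workflow`); the entrypoint shape is illustrative, not prescribed:

```python
# Hypothetical notification-service startup; names are illustrative
import dapr.ext.workflow as wf

wfr = wf.WorkflowRuntime()

# ... @wfr.activity definitions from above go here ...

if __name__ == "__main__":
    wfr.start()  # registers activities with the local Dapr sidecar
    input("Notification service running. Press Enter to stop...\n")
    wfr.shutdown()
```

The runtime must actually be started: decorating a function registers it only with this `WorkflowRuntime` object, and remote callers can reach it only once the sidecar knows about it.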

Calling Remote Activities

Your orchestrator workflow can call these activities using app_id:

```python
@wf.workflow(name="task_processing_multi_app")
def task_processing_multi_app(ctx: wf.DaprWorkflowContext, order: TaskInput):
    """Workflow that orchestrates activities across multiple services."""
    # Step 1: Validate task (local activity)
    yield ctx.call_activity(validate_task, input=order)

    # Step 2: Send email (REMOTE activity on notification-service)
    email_result = yield ctx.call_activity(
        "send_email",
        input={"recipient": order.email, "subject": "Task Assigned"},
        app_id="notification-service",  # Target remote app
    )

    # Step 3: Send push (REMOTE activity on notification-service)
    push_result = yield ctx.call_activity(
        "send_push",
        input={"user_id": order.user_id, "message": "New Task"},
        app_id="notification-service",
    )

    return {"status": "completed", "notifications": [email_result, push_result]}
```

Cross-App Child Workflows

For more complex scenarios, you can invoke entire workflows on remote services:

```python
@wf.workflow(name="task_with_audit")
def task_with_audit(ctx: wf.DaprWorkflowContext, order: TaskOrder):
    """Workflow that includes a child workflow on a different service."""
    yield ctx.call_activity(validate_task, input=order)

    # Run audit workflow on audit-service (REMOTE child workflow)
    audit_result = yield ctx.call_child_workflow(
        workflow="compliance_audit_workflow",
        input={"task_id": order.task_id, "action": "task_assigned"},
        app_id="audit-service",
        instance_id=f"audit-{order.task_id}",
    )

    return {"status": "audited", "audit_id": audit_result["id"]}
```
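On the audit-service side, a workflow registered under the same name must exist. A minimal sketch of what that remote service might look like, assuming its own `WorkflowRuntime` named `wfr`; the `write_audit_record` activity and its ID scheme are hypothetical:

```python
# audit-service/main.py (hypothetical sketch; names are illustrative)
import dapr.ext.workflow as wf

wfr = wf.WorkflowRuntime()

@wfr.activity(name="write_audit_record")
def write_audit_record(ctx, event: dict) -> dict:
    """Persist the audit event; the ID generation here is illustrative only."""
    return {"audit_id": f"audit-{event['task_id']}"}

@wfr.workflow(name="compliance_audit_workflow")
def compliance_audit_workflow(ctx: wf.DaprWorkflowContext, event: dict):
    """Child workflow executed on audit-service for each auditable action."""
    record = yield ctx.call_activity(write_audit_record, input=event)
    return {"id": record["audit_id"], "status": "recorded"}
```

Note that the return shape matters: the parent reads `audit_result["id"]`, so the child must return that key.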

Deployment Requirements

Multi-app workflows have three non-negotiable requirements:

  1. Same Namespace: All apps must be in the same Kubernetes namespace. Cross-namespace workflow calls are not supported.
  2. Shared State Store: All apps must use the same named component and physical database for the workflow state store.
  3. Registration: Any activity or workflow invoked remotely must be registered with the WorkflowRuntime instance of the target app.
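Requirement 2 means, in practice, that every participating app declares an identically named Dapr state store component pointing at the same backing database. A minimal sketch, assuming a Redis-backed store; the component name, namespace, and host are illustrative:

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: workflowstatestore    # must be identical across all participating apps
  namespace: production       # same Kubernetes namespace for every app
spec:
  type: state.redis
  version: v1
  metadata:
    - name: redisHost
      value: redis-master.production.svc.cluster.local:6379
    - name: actorStateStore   # required: Dapr workflows run on the actor runtime
      value: "true"
```

If two apps point at different physical databases, each sees only its own half of the workflow history, and cross-app orchestration silently breaks.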

Error Handling for Cross-Service Failures

| Failure Mode | Cause | Dapr Behavior |
| --- | --- | --- |
| Service unavailable | Pod crashed or scaled to zero | Retry with backoff |
| Activity not found | Typo or missing registration | Immediate error |
| Network partition | Cluster networking issues | Retry with backoff |
| Activity timeout | Slow remote operation | Retry or fail (per policy) |

Resilient Retry Policies

```python
from datetime import timedelta
import dapr.ext.workflow as wf

# Remote activity with an explicit retry policy
email_result = yield ctx.call_activity(
    "send_email",
    input=notification_payload,
    app_id="notification-service",
    retry_policy=wf.RetryPolicy(
        first_retry_interval=timedelta(seconds=1),
        max_number_of_attempts=5,
        backoff_coefficient=2.0,
    ),
)
```
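With these values, the delay between attempts grows geometrically. A small stdlib-only helper (hypothetical, not part of the Dapr SDK) makes the schedule concrete:

```python
from datetime import timedelta

def backoff_schedule(first_interval: timedelta, coefficient: float,
                     max_attempts: int) -> list[float]:
    """Delay (in seconds) before each retry under exponential backoff."""
    return [
        first_interval.total_seconds() * (coefficient ** i)
        for i in range(max_attempts - 1)  # the first attempt has no delay
    ]

print(backoff_schedule(timedelta(seconds=1), 2.0, 5))  # [1.0, 2.0, 4.0, 8.0]
```

So five attempts with a 1-second initial interval and coefficient 2.0 span roughly 15 seconds of waiting before the call is declared failed, which is the budget your callers must tolerate.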

Reflect on Your Skill

Your dapr-deployment skill should now include multi-app workflow patterns. Test it:

Test Your Skill

```text
Using my dapr-deployment skill, design a multi-app workflow architecture.
I have three services: order-service, inventory-service, and shipping-service.
Show me the workflow code for order-service that calls activities on the
other two services, including saga-style compensation.
```

Try With AI

Prompt 1: Design Cross-Service Orchestration

```text
Help me design a multi-app workflow for an e-commerce order fulfillment system.
I have these services: order-api, payment-service, and inventory-service. Show me:
1. The workflow code with app_id parameters
2. Retry policies for each cross-service call
3. Compensation logic for failures
```

Prompt 2: Debug Multi-App Workflow Failures

```text
My multi-app workflow is failing with: "Activity 'process_payment' not found
on app 'payment-service'". Walk me through debugging this using kubectl and
curl to check metadata.
```

Prompt 3: Scale Multi-App Workflows

```text
Under load, my notification-service is overwhelmed by workflow calls. Show me
how to configure retry policies to back off appropriately and implement a
bulkhead pattern to limit concurrent cross-service calls.
```

Safety Note: Multi-app workflows create architectural coupling. If participating services use different SDK versions, you may encounter serialization incompatibilities. Always test the full coordination chain in a staging environment before upgrading versions.