In this chapter, we'll explore Computo's advanced capabilities for working with multiple input documents and performing standardized JSON diff/patch operations. These features enable sophisticated data comparison, versioning, and multi-document transformations while maintaining RFC 6902 compliance.
The $inputs System Variable: Working with Multiple Documents

Up until now, we've been working with single input documents using the $input variable. Computo now supports processing multiple input files simultaneously through the $inputs system variable.
The $inputs variable returns an array containing all input documents provided on the command line:
# Single input (traditional)
computo script.json input1.json
# Multiple inputs (new capability)
computo script.json input1.json input2.json input3.json
// Access all inputs as an array
["$inputs"]
// Access specific inputs by index
["get", ["$inputs"], "/0"] // First input
["get", ["$inputs"], "/1"] // Second input
["get", ["$inputs"], "/2"] // Third input
The familiar $input variable remains fully supported and is equivalent to accessing the first input:
// These are equivalent:
["$input"]
["get", ["$inputs"], "/0"]
Let's say you have user data from two different systems that need to be merged:
profile1.json:
{
"id": "user123",
"name": "Alice Johnson",
"last_seen": "2024-01-15T10:30:00Z"
}
profile2.json:
{
"id": "user123",
"email": "alice@example.com",
"preferences": {
"theme": "dark",
"notifications": true
}
}
merge_profiles.json:
["let", [
["profile1", ["get", ["$inputs"], "/0"]],
["profile2", ["get", ["$inputs"], "/1"]]
],
["obj",
["user_id", ["get", ["$", "/profile1"], "/id"]],
["name", ["get", ["$", "/profile1"], "/name"]],
["email", ["get", ["$", "/profile2"], "/email"]],
["preferences", ["get", ["$", "/profile2"], "/preferences"]],
["last_seen", ["get", ["$", "/profile1"], "/last_seen"]]
]
]
Usage:
computo --pretty=2 merge_profiles.json profile1.json profile2.json
Output:
{
"user_id": "user123",
"name": "Alice Johnson",
"email": "alice@example.com",
"preferences": {
"theme": "dark",
"notifications": true
},
"last_seen": "2024-01-15T10:30:00Z"
}
Check if documents are consistent across different sources:
validate_consistency.json:
["let", [
["doc1", ["get", ["$inputs"], "/0"]],
["doc2", ["get", ["$inputs"], "/1"]],
["user_id_match", ["==",
["get", ["$", "/doc1"], "/id"],
["get", ["$", "/doc2"], "/id"]
]]
],
["obj",
["documents_consistent", ["$", "/user_id_match"]],
["doc1_id", ["get", ["$", "/doc1"], "/id"]],
["doc2_id", ["get", ["$", "/doc2"], "/id"]],
["total_inputs", ["count", ["$inputs"]]]
]
]
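Running this against profile1.json and profile2.json from the merge example, where both documents share the id user123, produces:

{
"documents_consistent": true,
"doc1_id": "user123",
"doc2_id": "user123",
"total_inputs": 2
}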
Computo implements the RFC 6902 JSON Patch standard, enabling precise document comparison and modification through standardized operations.
The diff Operator

Generates a JSON Patch array that describes the differences between two documents:
["diff", <original_document>, <modified_document>]
Example:
["diff",
{"status": "active", "id": 123},
{"status": "archived", "id": 123}
]
Output:
[{"op": "replace", "path": "/status", "value": "archived"}]
The patch Operator

Applies a JSON Patch array to a document:
["patch", <document_to_modify>, <patch_array>]
Example:
["patch",
{"status": "active", "id": 123},
[{"op": "replace", "path": "/status", "value": "archived"}]
]
Output:
{"status": "archived", "id": 123}
Let's walk through a complete workflow that demonstrates document versioning and change management.
archive_user.json:
["obj",
["id", ["get", ["$input"], "/id"]],
["name", ["get", ["$input"], "/name"]],
["status", "archived"],
["archived_at", "2024-01-15T12:00:00Z"]
]
original_user.json:
{
"id": 123,
"name": "Alice",
"status": "active"
}
Generate the patch:
computo --diff archive_user.json original_user.json > archive_patch.json
archive_patch.json (generated):
[
{"op": "replace", "path": "/status", "value": "archived"},
{"op": "add", "path": "/archived_at", "value": "2024-01-15T12:00:00Z"}
]
apply_patch.json:
["patch",
["get", ["$inputs"], "/0"],
["get", ["$inputs"], "/1"]
]
computo --pretty=2 apply_patch.json original_user.json archive_patch.json
Output:
{
"id": 123,
"name": "Alice",
"status": "archived",
"archived_at": "2024-01-15T12:00:00Z"
}
Compare configurations between environments and generate a patch that brings staging in line with production:
sync_configs.json:
["let", [
["prod_config", ["get", ["$inputs"], "/0"]],
["staging_config", ["get", ["$inputs"], "/1"]],
["sync_patch", ["diff", ["$", "/prod_config"], ["$", "/staging_config"]]]
],
["obj",
["requires_sync", [">", ["count", ["$", "/sync_patch"]], 0]],
["patch_operations", ["$", "/sync_patch"]],
["staging_after_sync", ["if",
["$", "/requires_sync"],
["patch", ["$", "/staging_config"], ["$", "/sync_patch"]],
["$", "/staging_config"]
]]
]
]
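For example, with a hypothetical production config of {"timeout": 30, "debug": false} and a staging config of {"timeout": 10, "debug": false}, the script would report:

{
"requires_sync": true,
"patch_operations": [{"op": "replace", "path": "/timeout", "value": 30}],
"staging_after_sync": {"timeout": 30, "debug": false}
}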
Create rollback patches by reversing the diff direction:
generate_rollback.json:
// Generate rollback patch (reverse diff)
["diff",
["get", ["$inputs"], "/1"], // new version
["get", ["$inputs"], "/0"] // original version
]
// This creates a patch that rolls back from new to original
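For instance, reversing the archive example from earlier yields a patch equivalent to:

[
{"op": "replace", "path": "/status", "value": "active"},
{"op": "remove", "path": "/archived_at"}
]

Applying this patch to the archived document restores the original user record.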
Combine data from multiple APIs or databases:
aggregate_user_data.json:
["let", [
["profile_service", ["get", ["$inputs"], "/0"]],
["settings_service", ["get", ["$inputs"], "/1"]],
["activity_service", ["get", ["$inputs"], "/2"]]
],
["obj",
["user_id", ["get", ["$", "/profile_service"], "/id"]],
["basic_info", ["obj",
["name", ["get", ["$", "/profile_service"], "/name"]],
["email", ["get", ["$", "/profile_service"], "/email"]]
]],
["preferences", ["get", ["$", "/settings_service"], "/preferences"]],
["recent_activity", ["get", ["$", "/activity_service"], "/last_actions"]],
["last_updated", ["get", ["$", "/activity_service"], "/timestamp"]]
]
]
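Usage, with one document per service (the file names here are illustrative):

computo --pretty=2 aggregate_user_data.json profile.json settings.json activity.json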
When working with patch operations, be aware of potential failures: under RFC 6902, if any operation fails (for example, a test operation whose expected value doesn't match the document), the entire patch is rejected.

Use conditional logic to handle potential patch failures gracefully:
safe_patch_apply.json:
["let", [
["document", ["get", ["$inputs"], "/0"]],
["patch_ops", ["get", ["$inputs"], "/1"]],
["patch_count", ["count", ["$", "/patch_ops"]]]
],
["obj",
["original_document", ["$", "/document"]],
["patch_operations", ["$", "/patch_ops"]],
["patch_safe", ["==", ["$", "/patch_count"], 1]],
["result", ["if",
["$", "/patch_safe"],
["patch", ["$", "/document"], ["$", "/patch_ops"]],
["$", "/document"]
]]
]
]
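RFC 6902 also defines a test operation that makes the entire patch fail unless the document contains an expected value. Assuming Computo's patch operator supports the full RFC 6902 operation set (which compliance implies), you can use test as a guard inside the patch array itself:

["patch",
{"status": "active", "id": 123},
[
{"op": "test", "path": "/status", "value": "active"},
{"op": "replace", "path": "/status", "value": "archived"}
]
]
// The replace runs only if /status is still "active";
// otherwise the whole patch is rejected.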
The --diff Flag

Generate patches directly from transformations without modifying your scripts:
# Traditional transformation
computo transform.json input.json
# Generate patch from same transformation
computo --diff transform.json input.json
This is particularly useful for:
- Version control integration: generate patches for change tracking
- Automated deployment: create configuration update patches
- Data migration planning: preview changes before applying them
Remember that --diff only works with a single input file:
# Valid: Single input with --diff
computo --diff script.json input.json
# Invalid: Multiple inputs with --diff
computo --diff script.json input1.json input2.json
Computo includes the functional programming operators car and cdr, inspired by Lisp, which provide elegant ways to work with arrays and multiple inputs.
// car: Get the first element
["car", {"array": [1, 2, 3, 4]}]
// Result: 1
// cdr: Get everything except the first element
["cdr", {"array": [1, 2, 3, 4]}]
// Result: [2, 3, 4]
// Composition: Get the second element
["car", ["cdr", {"array": [1, 2, 3, 4]}]]
// Result: 2
The car and cdr operators are particularly powerful for processing multiple inputs in a functional style:
Traditional approach:
["let", [
["initial", ["get", ["$inputs"], "/0"]],
["patch1", ["get", ["$inputs"], "/1"]],
["patch2", ["get", ["$inputs"], "/2"]]
],
["patch", ["patch", ["$", "/initial"], ["$", "/patch1"]], ["$", "/patch2"]]
]
Functional approach with car/cdr:
["reduce",
["cdr", ["$inputs"]], // All patches (skip first input)
["lambda", ["state", "patch"],
["patch", ["$", "/state"], ["$", "/patch"]]
],
["car", ["$inputs"]] // Initial state (first input)
]
Benefits of the functional approach:
- Works with any number of patches, not just two
- More readable and declarative
- Follows functional programming principles
- Easier to test and reason about
conversation_processor.json:
// Process conversation updates using functional list operations
["let", [
["initial_conversation", ["car", ["$inputs"]]], // First input
["all_patches", ["cdr", ["$inputs"]]], // Remaining inputs
["final_state", ["reduce",
["$", "/all_patches"],
["lambda", ["conversation", "patch"],
["patch", ["$", "/conversation"], ["$", "/patch"]]
],
["$", "/initial_conversation"]
]],
["patch_count", ["count", ["$", "/all_patches"]]]
],
["obj",
["conversation_id", ["get", ["$", "/final_state"], "/id"]],
["message_count", ["count", ["get", ["$", "/final_state"], "/messages"]]],
["patches_applied", ["$", "/patch_count"]],
["final_conversation", ["$", "/final_state"]]
]
]
Usage:
computo --pretty=2 conversation_processor.json initial_conversation.json patch1.json patch2.json patch3.json
Beyond basic list processing, Computo provides powerful operators for constructing and manipulating arrays in sophisticated ways. These operations complement the functional car and cdr operators.
The cons Operator: List Building

The cons operator prepends an item to the beginning of an array, following functional programming conventions:
["cons", <item>, <array>]
Basic usage:
["cons", "first", {"array": [2, 3, 4]}]
// Result: ["first", 2, 3, 4]
Building lists incrementally:
// Start with empty array and build a list
["cons", 1,
["cons", 2,
["cons", 3, {"array": []}]
]
]
// Result: [1, 2, 3]
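Because cons prepends, folding an array with reduce and cons reverses it, a classic functional idiom. A sketch, assuming reduce walks the array left to right as in the earlier examples:

["reduce",
{"array": [1, 2, 3]},
["lambda", ["acc", "x"], ["cons", ["$", "/x"], ["$", "/acc"]]],
{"array": []}
]
// Result: [3, 2, 1]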
Practical example - Adding metadata to processing results:
["let", [
["processing_results", ["map",
["get", ["$input"], "/user_data"],
["lambda", ["user"], ["obj",
["id", ["get", ["$", "/user"], "/id"]],
["processed", true]
]]
]],
["timestamp", "2024-01-15T12:00:00Z"]
],
["cons",
["obj", ["processing_metadata", ["$", "/timestamp"]]],
["$", "/processing_results"]
]
]
The append Operator: Array Concatenation

The append operator concatenates multiple arrays into a single array:
["append", <array1>, <array2>, <array3>, ...]
Basic concatenation:
["append",
{"array": [1, 2]},
{"array": [3, 4]},
{"array": [5]}
]
// Result: [1, 2, 3, 4, 5]
Combining data from multiple sources:
["let", [
["primary_users", ["get", ["$inputs"], "/0/users"]],
["backup_users", ["get", ["$inputs"], "/1/users"]],
["temp_users", ["get", ["$inputs"], "/2/users"]]
],
["obj",
["all_users", ["append",
["$", "/primary_users"],
["$", "/backup_users"],
["$", "/temp_users"]
]],
["total_count", ["count", ["append",
["$", "/primary_users"],
["$", "/backup_users"],
["$", "/temp_users"]
]]]
]
]
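Usage, with one user list per source (the script and file names here are illustrative):

computo --pretty=2 combine_users.json primary.json backup.json temp.json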
Real-world example - Aggregating log entries:
// Combine log entries from multiple services
["let", [
["web_logs", ["get", ["$inputs"], "/0/entries"]],
["api_logs", ["get", ["$inputs"], "/1/entries"]],
["db_logs", ["get", ["$inputs"], "/2/entries"]]
],
["obj",
["combined_logs", ["append",
["$", "/web_logs"],
["$", "/api_logs"],
["$", "/db_logs"]
]],
["log_sources", {"array": ["web", "api", "database"]}],
["total_entries", ["count", ["append",
["$", "/web_logs"],
["$", "/api_logs"],
["$", "/db_logs"]
]]]
]
]
The chunk Operator: Batch Processing

The chunk operator splits an array into smaller arrays of a specified size, perfect for batch processing:
["chunk", <array>, <size>]
Basic chunking:
["chunk", {"array": [1, 2, 3, 4, 5, 6, 7]}, 3]
// Result: [[1, 2, 3], [4, 5, 6], [7]]
Processing data in batches:
["let", [
["all_users", ["get", ["$input"], "/users"]],
["batch_size", 50],
["user_batches", ["chunk", ["$", "/all_users"], ["$", "/batch_size"]]]
],
["obj",
["total_users", ["count", ["$", "/all_users"]]],
["batch_count", ["count", ["$", "/user_batches"]]],
["batch_size", ["$", "/batch_size"]],
["batches", ["$", "/user_batches"]]
]
]
Real-world example - Email campaign processing:
// Prepare email lists for batch sending
["let", [
["subscriber_list", ["get", ["$input"], "/subscribers"]],
["batch_size", 100],
["email_batches", ["chunk", ["$", "/subscriber_list"], ["$", "/batch_size"]]]
],
["obj",
["campaign_id", ["get", ["$input"], "/campaign_id"]],
["total_subscribers", ["count", ["$", "/subscriber_list"]]],
["email_batches", ["map",
["$", "/email_batches"],
["lambda", ["batch"], ["obj",
["batch_size", ["count", ["$", "/batch"]]],
["recipients", ["$", "/batch"]]
]]
]],
["estimated_send_time_minutes", ["/",
["count", ["$", "/email_batches"]],
2
]]
]
]
The partition Operator: Conditional Splitting

The partition operator splits an array into two groups based on a predicate function:
["partition", <array>, <lambda_predicate>]
Returns: [<truthy_items>, <falsy_items>]
Basic partitioning:
["partition",
{"array": [1, 2, 3, 4, 5, 6]},
["lambda", ["x"], [">", ["$", "/x"], 3]]
]
// Result: [[4, 5, 6], [1, 2, 3]]
Separating valid and invalid records:
["let", [
["user_records", ["get", ["$input"], "/users"]],
["partitioned", ["partition",
["$", "/user_records"],
["lambda", ["user"], ["&&",
["!=", ["get", ["$", "/user"], "/email"], null],
["!=", ["get", ["$", "/user"], "/name"], ""]
]]
]]
],
["obj",
["valid_users", ["car", ["$", "/partitioned"]]],
["invalid_users", ["car", ["cdr", ["$", "/partitioned"]]]],
["valid_count", ["count", ["car", ["$", "/partitioned"]]]],
["invalid_count", ["count", ["car", ["cdr", ["$", "/partitioned"]]]]],
["validation_summary", ["obj",
["total_processed", ["count", ["$", "/user_records"]]],
["pass_rate", ["/",
["count", ["car", ["$", "/partitioned"]]],
["count", ["$", "/user_records"]]
]]
]]
]
]
Real-world example - Order processing:
// Separate urgent and standard orders for different processing queues
["let", [
["all_orders", ["get", ["$input"], "/orders"]],
["partitioned_orders", ["partition",
["$", "/all_orders"],
["lambda", ["order"], ["||",
["==", ["get", ["$", "/order"], "/priority"], "urgent"],
[">", ["get", ["$", "/order"], "/amount"], 1000]
]]
]]
],
["obj",
["urgent_orders", ["car", ["$", "/partitioned_orders"]]],
["standard_orders", ["car", ["cdr", ["$", "/partitioned_orders"]]]],
["processing_queues", ["obj",
["urgent_queue", ["obj",
["orders", ["car", ["$", "/partitioned_orders"]]],
["count", ["count", ["car", ["$", "/partitioned_orders"]]]],
["estimated_processing_hours", 2]
]],
["standard_queue", ["obj",
["orders", ["car", ["cdr", ["$", "/partitioned_orders"]]]],
["count", ["count", ["car", ["cdr", ["$", "/partitioned_orders"]]]]],
["estimated_processing_hours", 24]
]]
]]
]
]
These operators work beautifully together for sophisticated data processing workflows:
Example: Processing survey responses in batches by category:
["let", [
["all_responses", ["get", ["$input"], "/survey_responses"]],
// First partition by satisfaction level
["satisfaction_split", ["partition",
["$", "/all_responses"],
["lambda", ["response"], [">", ["get", ["$", "/response"], "/satisfaction"], 7]]
]],
["positive_responses", ["car", ["$", "/satisfaction_split"]]],
["negative_responses", ["car", ["cdr", ["$", "/satisfaction_split"]]]],
// Then chunk positive responses for follow-up campaigns
["positive_batches", ["chunk", ["$", "/positive_responses"], 25]],
// And chunk negative responses for support outreach
["negative_batches", ["chunk", ["$", "/negative_responses"], 10]]
],
["obj",
["processing_summary", ["obj",
["total_responses", ["count", ["$", "/all_responses"]]],
["positive_count", ["count", ["$", "/positive_responses"]]],
["negative_count", ["count", ["$", "/negative_responses"]]]
]],
["follow_up_campaigns", ["map",
["$", "/positive_batches"],
["lambda", ["batch"], ["obj",
["type", "testimonial_request"],
["recipients", ["$", "/batch"]],
["batch_size", ["count", ["$", "/batch"]]]
]]
]],
["support_outreach", ["map",
["$", "/negative_batches"],
["lambda", ["batch"], ["obj",
["type", "customer_support"],
["priority", "high"],
["recipients", ["$", "/batch"]],
["batch_size", ["count", ["$", "/batch"]]]
]]
]]
]
]
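Usage (assuming the script is saved as survey_processor.json):

computo --pretty=2 survey_processor.json responses.json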
When a multi-input script misbehaves, bind each input to a named variable with let bindings, and check ["count", ["$inputs"]] when the script seems to receive fewer inputs than expected.

In this chapter, you learned:
- How to work with multiple input documents through the $inputs system variable
- The difference between $input (the first document) and $inputs (all documents)
- How to generate RFC 6902 patches with the diff operator
- How to apply patches with the patch operator
- The car and cdr operators for elegant array manipulation
- cons for prepending items to arrays
- append for combining multiple data sources
- chunk for splitting arrays into manageable sizes
- partition for separating data based on predicates

These features enable Computo to handle complex scenarios involving document comparison, versioning, configuration management, and multi-source data processing while maintaining RFC 6902 compliance for interoperability with other JSON Patch tools.
In the next chapter, we'll explore performance optimization techniques and best practices for production deployments.