Compare commits
79 Commits
v0.1.2
...
split_smar
| SHA1 |
|---|
| 3e1c8d563e |
| 1299febcdc |
| be94c62760 |
| 6a862ef243 |
| ae2de5fc62 |
| df0bbc7327 |
| d94761c866 |
| f8235e1a59 |
| 647cadf497 |
| 8c793a81b6 |
| 6a42ba7e43 |
| 14b3790251 |
| 61d81bed62 |
| 1a10bc1a5f |
| 7f68d08134 |
| ab20cd896f |
| 5a9e93d6e7 |
| b51641dc7e |
| 45f1257896 |
| 3e2b8b1e3a |
| 90d81617ef |
| 64c62e616b |
| 2c340e37c7 |
| 7853e94d2e |
| 99bf57b154 |
| 0fa6eaf95b |
| 76f42be740 |
| d99dc41be9 |
| 263508b8f7 |
| 0c2cca30ed |
| 46fdf668c6 |
| f8a92a45a0 |
| cec70e6036 |
| f9e08ba628 |
| c12a078149 |
| dedd803dc3 |
| e8e927a491 |
| d950bbac23 |
| fc8da2ebf5 |
| f6e50c405f |
| c06f508e8f |
| 97bf1e47f4 |
| ef47fddd56 |
| 896dd84d2a |
| def75d8f86 |
| 69f2173f75 |
| 075d355c58 |
| 0de9725ba8 |
| 6dcccc903f |
| 507b4951b4 |
| a064be0e5c |
| 8a35f1d4dc |
| 9e5ee61785 |
| 4b5b5d6ed8 |
| 3f45052193 |
| 7dc7ab67e4 |
| e7c5e5f77f |
| 4e32a958ea |
| a260def38d |
| 782a935d3d |
| 3fbdabc874 |
| 7386f8ed0b |
| 51e494c48b |
| 9ea9d55eee |
| 8c106464fd |
| 7433c147c9 |
| 9c4a9ea1e5 |
| 82804c6803 |
| 483caab54c |
| a9821b1ae6 |
| 0744642985 |
| 1d5c6f3348 |
| ad87934abf |
| 6b49fa68c0 |
| f0df169689 |
| d9fd7a61bb |
| 897f717da5 |
| 51e1a065ad |
| e7f50e899d |
@@ -12,3 +12,42 @@ Role: Principal Systems Architect & Lead Software Engineer. Objective: Implement

Create a walkthrough of Julia service A sending a mixed-content chat message to Julia service B. The chat message must include

I updated the following:

- NATSBridge.jl: essentially, I added a NATS_connection keyword and a new publish_message function to support the keyword.

Use them and ONLY them as ground truth.

Then update the following files accordingly:

- architecture.md
- implementation.md

All APIs should be semantically consistent, and naming should be consistent across the board.

Task: Update NATSBridge.js to reflect recent changes in NATSBridge.jl and the docs.

Context: NATSBridge.jl and the docs have been updated.

Requirements:

Source of Truth: Treat the updated NATSBridge.jl and docs as the definitive source.
API Consistency: Ensure the Main Package API (e.g., smartsend(), publish_message()) uses consistent naming across all three supported languages.
Ecosystem Variance: Low-level native functions (e.g., NATS.connect(), JSON.read()) should follow the conventions of the specific language ecosystem and do not require cross-language consistency.
||||
@@ -1,8 +1,8 @@
# This file is machine-generated - editing it directly is not advised

-julia_version = "1.12.4"
+julia_version = "1.12.5"
manifest_format = "2.0"
-project_hash = "be1e3c2d8b7f4f0ee7375c94aaf704ce73ba57b9"
+project_hash = "b632f853bcf5355f5c53ad3efa7a19f70444dc6c"

[[deps.AliasTables]]
deps = ["PtrArrays", "Random"]
@@ -436,6 +436,12 @@ git-tree-sha1 = "d9d9a189fb9155a460e6b5e8966bf6a66737abf8"
uuid = "55e73f9c-eeeb-467f-b4cc-a633fde63d2a"
version = "0.1.0"

+[[deps.NATSBridge]]
+deps = ["Arrow", "DataFrames", "Dates", "GeneralUtils", "HTTP", "JSON", "NATS", "PrettyPrinting", "Revise", "UUIDs"]
+path = "."
+uuid = "f2724d33-f338-4a57-b9f8-1be882570d10"
+version = "0.4.1"
+
[[deps.NanoDates]]
deps = ["Dates", "Parsers"]
git-tree-sha1 = "850a0557ae5934f6e67ac0dc5ca13d0328422d1f"
||||
@@ -1,194 +0,0 @@
### API
The Plik server exposes a RESTful API to manage uploads and retrieve files:

Get and create uploads:

- **POST** /upload
  - Params (JSON object in the request body):
    - oneshot (bool)
    - stream (bool)
    - removable (bool)
    - ttl (int)
    - login (string)
    - password (string)
    - files (see below)
  - Returns a JSON-formatted upload object. Important fields:
    - id (required to upload files)
    - uploadToken (required to upload/remove files)
    - files (see below)

For stream mode you need to know the file id before the upload starts, as it will block. The file size and/or file type also need to be known before the upload starts, as they have to be printed in the HTTP response headers. To get the file ids, pass a "files" JSON object with each file you are about to upload. Fill the reference field with an arbitrary string to avoid matching file ids using the fileName field. This is also used to notify of MISSING files when a file upload is not yet finished or has failed.
```
"files" : [
  {
    "fileName": "file.txt",
    "fileSize": 12345,
    "fileType": "text/plain",
    "reference": "0"
  },...
]
```

- **GET** /upload/:uploadid:
  - Get upload metadata (file list, upload date, ttl, ...)

Upload file:

- **POST** /$mode/:uploadid:/:fileid:/:filename:
  - The request body must be a multipart request with a part named "file" containing the file data.

- **POST** /file/:uploadid:
  - Same as above without passing a file id; won't work in stream mode.

- **POST** /:
  - Quick mode: automatically creates an upload with default parameters and adds the file to it.

Get file:

- **HEAD** /$mode/:uploadid:/:fileid:/:filename:
  - Returns only the HTTP headers. Useful to learn Content-Type and Content-Length without downloading the file, especially if the upload has the OneShot option enabled.

- **GET** /$mode/:uploadid:/:fileid:/:filename:
  - Download the file. The filename **MUST** match. A browser might try to display the file if it's a JPEG, for example; you can force the download with ?dl=1 in the URL.

- **GET** /archive/:uploadid:/:filename:
  - Download the uploaded files in a zip archive. :filename: must end with .zip

Remove file:

- **DELETE** /$mode/:uploadid:/:fileid:/:filename:
  - Delete the file. The upload **MUST** have the "removable" option enabled.

Show server details:

- **GET** /version
  - Show the Plik server version and some build information (build host, date, git revision, ...)

- **GET** /config
  - Show the Plik server configuration (ttl values, max file size, ...)

- **GET** /stats
  - Get server statistics (upload/file count, user count, total size used)
  - Admin only

User authentication:

Plik can authenticate users using the Google and/or OVH third-party APIs.
The /auth API is designed for the Plik web application; nevertheless, if you want to automate it, be sure to provide a valid Referer HTTP header and forward all session cookies.
Plik session cookies have the "secure" flag set, so they can only be transmitted over secure HTTPS connections.
To avoid CSRF attacks, the value of the plik-xsrf cookie MUST be copied into the X-XSRFToken HTTP header of each authenticated request.
Once authenticated, a user can generate upload tokens. These tokens can be used in the X-PlikToken HTTP header to link an upload to the user account. The token can be put in the ~/.plikrc file of the Plik command-line client.
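The cookie-to-header rules above are easy to get wrong when scripting against the API; here is an illustrative Python helper (the function name is ours, not part of Plik):

```python
def plik_auth_headers(cookies, upload_token=None):
    """Build Plik auth headers from session cookies and an optional upload token.

    `cookies` is a plain dict of cookie name -> value taken from a prior
    /auth response; `upload_token` is a token created via POST /me/token.
    """
    headers = {}
    # CSRF protection: echo the plik-xsrf cookie in the X-XSRFToken header
    if "plik-xsrf" in cookies:
        headers["X-XSRFToken"] = cookies["plik-xsrf"]
    # Link uploads to a user account via an upload token
    if upload_token is not None:
        headers["X-PlikToken"] = upload_token
    return headers

headers = plik_auth_headers({"plik-xsrf": "abc123"}, upload_token="tok42")
```

Remember to also forward the session cookies themselves with every authenticated request; the headers alone are not enough.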
- **Local**:
  - You'll need to create users using the server command line

- **Google**:
  - You'll need to create a new application in the [Google Developer Console](https://console.developers.google.com)
  - You'll be handed a Google API ClientID and a Google API ClientSecret that you'll need to put in the plikd.cfg file
  - Do not forget to whitelist the valid origin and redirect URL (https://yourdomain/auth/google/callback) for your domain

- **OVH**:
  - You'll need to create a new application in the OVH API: https://eu.api.ovh.com/createApp/
  - You'll be handed an OVH application key and an OVH application secret key that you'll need to put in the plikd.cfg file

- **GET** /auth/google/login
  - Get the Google user consent URL. The user has to visit this URL to authenticate

- **GET** /auth/google/callback
  - Callback of the user consent dialog
  - The user will be redirected back to the web application with a Plik session cookie at the end of this call

- **GET** /auth/ovh/login
  - Get the OVH user consent URL. The user has to visit this URL to authenticate
  - The response will contain a temporary session cookie to forward the API endpoint and OVH consumer key to the callback

- **GET** /auth/ovh/callback
  - Callback of the user consent dialog
  - The user will be redirected back to the web application with a Plik session cookie at the end of this call

- **POST** /auth/local/login
  - Params:
    - login: user login
    - password: user password

- **GET** /auth/logout
  - Invalidate the Plik session cookies

- **GET** /me
  - Return basic user info (ID, name, email) and tokens

- **DELETE** /me
  - Remove the user account

- **GET** /me/token
  - List user tokens
  - This call uses pagination

- **POST** /me/token
  - Create a new upload token
  - A comment can be passed in the JSON body

- **DELETE** /me/token/{token}
  - Revoke an upload token

- **GET** /me/uploads
  - List user uploads
  - Params:
    - token: filter by token
  - This call uses pagination

- **DELETE** /me/uploads
  - Remove all uploads linked to a user account
  - Params:
    - token: filter by token

- **GET** /me/stats
  - Get user statistics (upload/file count, total size used)

- **GET** /users
  - List all users
  - This call uses pagination
  - Admin only

QRCode:

- **GET** /qrcode
  - Generate a QRCode image from a URL
  - Params:
    - url: the URL you want to store in the QRCode
    - size: the size of the generated image in pixels (default: 250, max: 1000)

$mode can be "file" or "stream" depending on whether stream mode is enabled. See the FAQ for more details.

Examples:
```sh
# Create an upload (the JSON response contains the upload id and upload token)
$ curl -X POST http://127.0.0.1:8080/upload

# Create a OneShot upload
$ curl -X POST -d '{ "OneShot" : true }' http://127.0.0.1:8080/upload

# Upload a file to an upload
$ curl -X POST --header "X-UploadToken: M9PJftiApG1Kqr81gN3Fq1HJItPENMhl" -F "file=@test.txt" http://127.0.0.1:8080/file/IsrIPIsDskFpN12E

# Get headers
$ curl -I http://127.0.0.1:8080/file/IsrIPIsDskFpN12E/sFjIeokH23M35tN4/test.txt
HTTP/1.1 200 OK
Content-Disposition: filename=test.txt
Content-Length: 3486
Content-Type: text/plain; charset=utf-8
Date: Fri, 15 May 2015 09:16:20 GMT
```
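The `/$mode/:uploadid:/:fileid:/:filename:` pattern used throughout can be assembled programmatically; a small illustrative Python helper (the function name is ours, not part of Plik):

```python
def plik_file_url(base, mode, uploadid, fileid, filename, force_download=False):
    """Assemble a Plik file URL of the form /$mode/:uploadid:/:fileid:/:filename:.

    `mode` is "file" or "stream" depending on whether stream mode is enabled.
    """
    url = f"{base}/{mode}/{uploadid}/{fileid}/{filename}"
    if force_download:
        url += "?dl=1"  # ask the server to force a download instead of inline display
    return url

url = plik_file_url("http://127.0.0.1:8080", "file",
                    "IsrIPIsDskFpN12E", "sFjIeokH23M35tN4", "test.txt",
                    force_download=True)
```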
Project.toml (13 lines changed)
@@ -1,8 +1,21 @@
name = "NATSBridge"
uuid = "f2724d33-f338-4a57-b9f8-1be882570d10"
version = "0.4.3"
authors = ["narawat <narawat@gmail.com>"]

[deps]
Arrow = "69666777-d1a9-59fb-9406-91d4454c9d45"
Base64 = "2a0f44e3-6c83-55bd-87e4-b1978d98bd5f"
DataFrames = "a93c6f00-e57d-5684-b7b6-d8193f3e46c0"
Dates = "ade2ca70-3891-5945-98fb-dc099432e06a"
GeneralUtils = "c6c72f09-b708-4ac8-ac7c-2084d70108fe"
HTTP = "cd3eb016-35fb-5094-929b-558a96fad6f3"
JSON = "682c06a0-de6a-54ab-a142-c8b1cf79cde6"
NATS = "55e73f9c-eeeb-467f-b4cc-a633fde63d2a"
PrettyPrinting = "54e16d92-306c-5ea0-a30b-337be88ac337"
Revise = "295af30f-e4ad-537b-8983-00126c2a3abe"
UUIDs = "cf7118a7-6976-5b1a-9a39-7adc72f591a4"

[compat]
Base64 = "1.11.0"
JSON = "1.4.0"
||||
@@ -1,321 +0,0 @@
# Implementation Guide: Bi-Directional Data Bridge

## Overview

This document describes the implementation of the high-performance, bi-directional data bridge between Julia and JavaScript services using NATS (Core & JetStream), implementing the Claim-Check pattern for large payloads.

## Architecture

The implementation follows the Claim-Check pattern:

```
┌─────────────────────────────────────────────────────────────────────────┐
│                           SmartSend Function                            │
└─────────────────────────────────────────────────────────────────────────┘
                                    │
                                    ▼
┌─────────────────────────────────────────────────────────────────────────┐
│                        Is payload size < 1MB?                           │
└─────────────────────────────────────────────────────────────────────────┘
                                    │
                  ┌─────────────────┴─────────────────┐
                  ▼                                   ▼
        ┌─────────────────┐                 ┌─────────────────┐
        │   Direct Path   │                 │    Link Path    │
        │    (< 1MB)      │                 │    (> 1MB)      │
        │                 │                 │                 │
        │ • Serialize to  │                 │ • Serialize to  │
        │   IOBuffer      │                 │   IOBuffer      │
        │ • Base64 encode │                 │ • Upload to     │
        │ • Publish to    │                 │   HTTP Server   │
        │   NATS          │                 │ • Publish to    │
        │                 │                 │   NATS with URL │
        └─────────────────┘                 └─────────────────┘
```
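The decision in the diagram fits in a few lines; a minimal Python sketch of the claim-check selection (the function name and the `upload` callable are ours, standing in for the HTTP file-server upload, not the package API):

```python
import base64
import uuid

SIZE_THRESHOLD = 1_000_000  # 1 MB

def smart_envelope(payload_bytes, upload=None, threshold=SIZE_THRESHOLD):
    """Build a claim-check envelope: inline small payloads, link large ones.

    `upload` is a callable(bytes) -> URL used only for the link path.
    """
    env = {"correlation_id": str(uuid.uuid4())}
    if len(payload_bytes) < threshold:
        env["transport"] = "direct"
        env["payload"] = base64.b64encode(payload_bytes).decode("ascii")
    else:
        env["transport"] = "link"
        env["url"] = upload(payload_bytes)
    return env

# Small payloads go direct (base64 inline)...
small = smart_envelope(b"hello")
# ...large ones are handed to the uploader and referenced by URL.
big = smart_envelope(b"x" * 2_000_000, upload=lambda b: "http://files/blob1")
```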
## Files

### Julia Module: [`src/julia_bridge.jl`](../src/julia_bridge.jl)

The Julia implementation provides:

- **[`MessageEnvelope`](../src/julia_bridge.jl)**: Struct for the unified JSON envelope
- **[`SmartSend()`](../src/julia_bridge.jl)**: Handles transport selection based on payload size
- **[`SmartReceive()`](../src/julia_bridge.jl)**: Handles both direct and link transport

### JavaScript Module: [`src/js_bridge.js`](../src/js_bridge.js)

The JavaScript implementation provides:

- **`MessageEnvelope` class**: For the unified JSON envelope
- **[`SmartSend()`](../src/js_bridge.js)**: Handles transport selection based on payload size
- **[`SmartReceive()`](../src/js_bridge.js)**: Handles both direct and link transport

## Installation

### Julia Dependencies

```julia
using Pkg
Pkg.add("NATS")
Pkg.add("Arrow")
Pkg.add("JSON3")
Pkg.add("HTTP")
Pkg.add("UUIDs")
Pkg.add("Dates")
```

### JavaScript Dependencies

```bash
npm install nats.js apache-arrow uuid base64-url
```

## Usage Tutorial

### Step 1: Start NATS Server

```bash
docker run -p 4222:4222 nats:latest
```

### Step 2: Start HTTP File Server (optional)

```bash
# Create a directory for file uploads
mkdir -p /tmp/fileserver

# Use any HTTP server that supports POST for file uploads.
# Note: Python's built-in server only serves downloads (GET); it does not
# accept POST uploads, so use it for the download side only.
python3 -m http.server 8080 --directory /tmp/fileserver
```

### Step 3: Run Test Scenarios

```bash
# Scenario 1: Command & Control (JavaScript sender)
node test/scenario1_command_control.js

# Scenario 2: Large Arrow Table (JavaScript sender)
node test/scenario2_large_table.js

# Scenario 3: Julia-to-Julia communication
# Run both the Julia and JavaScript versions
julia test/scenario3_julia_to_julia.jl
node test/scenario3_julia_to_julia.js
```

## Usage

### Scenario 1: Command & Control (Small JSON)

#### JavaScript (Sender)
```javascript
const { SmartSend } = require('./js_bridge');

const config = {
  step_size: 0.01,
  iterations: 1000
};

await SmartSend("control", config, "json", {
  correlationId: "unique-id"
});
```

#### Julia (Receiver)
```julia
using NATS
using JSON3

# Subscribe to the control subject
subscribe(nats, "control") do msg
    env = MessageEnvelope(String(msg.data))
    config = JSON3.read(env.payload)

    # Execute the simulation with the received parameters
    step_size = config.step_size
    iterations = config.iterations

    # Send an acknowledgment (JSON3 serializes with JSON3.write)
    response = Dict("status" => "Running", "correlation_id" => env.correlation_id)
    publish(nats, "control_response", JSON3.write(response))
end
```

### Scenario 2: Deep Dive Analysis (Large Arrow Table)

#### Julia (Sender)
```julia
using Arrow
using DataFrames

# Create a large DataFrame
df = DataFrame(
    id = 1:10_000_000,
    value = rand(10_000_000),
    category = rand(["A", "B", "C"], 10_000_000)
)

# Send via SmartSend with type="table" (Julia has no `await`; the call is synchronous)
SmartSend("analysis_results", df, "table")
```

#### JavaScript (Receiver)
```javascript
const { SmartReceive } = require('./js_bridge');

const result = await SmartReceive(msg);

// Use the table data for visualization with Perspective.js or D3
const table = result.data;
```

### Scenario 3: Live Binary Processing

#### JavaScript (Sender)
```javascript
const { SmartSend } = require('./js_bridge');

// Capture a binary chunk. getUserMedia returns a MediaStream, so raw bytes
// are extracted from it with a MediaRecorder.
const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
const recorder = new MediaRecorder(stream);
recorder.ondataavailable = async (e) => {
  const binaryData = new Uint8Array(await e.data.arrayBuffer());
  await SmartSend("binary_input", binaryData, "binary", {
    metadata: {
      sample_rate: 44100,
      channels: 1
    }
  });
};
recorder.start(1000); // emit a chunk every second
```

#### Julia (Receiver)
```julia
using WAV
using FFTW  # provides fft

# Receive binary data
function process_binary(data)
    # Perform an FFT or AI transcription
    spectrum = fft(data)

    # Send results back (JSON + Arrow table); Julia has no `await`
    results = Dict("transcription" => "sample text", "spectrum" => spectrum)
    SmartSend("binary_output", results, "json")
end
```

### Scenario 4: Catch-Up (JetStream)

#### Julia (Producer)
```julia
using NATS

function publish_health_status(nats)
    jetstream = JetStream(nats, "health_updates")

    while true
        status = Dict("cpu" => rand(), "memory" => rand())
        publish(jetstream, "health", status)
        sleep(5) # every 5 seconds
    end
end
```

#### JavaScript (Consumer)
```javascript
const { connect } = require('nats');

const nc = await connect({ servers: ['nats://localhost:4222'] });
const js = nc.jetstream();

// Request replay from the last 10 minutes
const consumer = await js.pullSubscribe("health", {
  durable_name: "catchup",
  max_batch: 100,
  max_ack_wait: 30000
});

// Process historical and real-time messages
for await (const msg of consumer) {
  const result = await SmartReceive(msg);
  // Process the data
  msg.ack();
}
```

## Configuration

### Environment Variables

| Variable | Default | Description |
|----------|---------|-------------|
| `NATS_URL` | `nats://localhost:4222` | NATS server URL |
| `FILESERVER_URL` | `http://localhost:8080/upload` | HTTP file server URL |
| `SIZE_THRESHOLD` | `1_000_000` | Size threshold in bytes (1 MB) |
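Reading these variables with their documented defaults is a one-liner each; an illustrative Python sketch (only the variable names come from the table, the rest is ours):

```python
import os

# Defaults mirror the table above.
NATS_URL = os.environ.get("NATS_URL", "nats://localhost:4222")
FILESERVER_URL = os.environ.get("FILESERVER_URL", "http://localhost:8080/upload")
# int() accepts "_" digit separators, so the documented default parses directly.
SIZE_THRESHOLD = int(os.environ.get("SIZE_THRESHOLD", "1_000_000"))
```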
### Message Envelope Schema

```json
{
  "correlation_id": "uuid-v4-string",
  "type": "json|table|binary",
  "transport": "direct|link",
  "payload": "base64-encoded-string",
  "url": "http://fileserver/path/to/data",
  "metadata": {
    "content_type": "application/octet-stream",
    "content_length": 123456,
    "format": "arrow_ipc_stream"
  }
}
```

`payload` is present only when `transport` is `direct`; `url` is present only when `transport` is `link`.
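A quick way to sanity-check an envelope against this schema (an illustrative Python sketch, not the package API):

```python
def check_envelope(env):
    """Validate the type/transport/payload/url invariants of the envelope schema."""
    assert env["type"] in {"json", "table", "binary"}
    if env["transport"] == "direct":
        # direct transport carries the base64 payload inline, never a URL
        assert "payload" in env and "url" not in env
    elif env["transport"] == "link":
        # link transport references the payload by URL only
        assert "url" in env and "payload" not in env
    else:
        raise ValueError(f"unknown transport {env['transport']!r}")
    return env

checked = check_envelope({
    "correlation_id": "00000000-0000-4000-8000-000000000000",
    "type": "binary",
    "transport": "link",
    "url": "http://fileserver/path/to/data",
    "metadata": {"content_type": "application/octet-stream",
                 "content_length": 123456,
                 "format": "arrow_ipc_stream"},
})
```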
## Performance Considerations

### Zero-Copy Reading
- Use Arrow's memory-mapped file reading
- Avoid unnecessary data copying during deserialization
- Use Apache Arrow's native IPC reader

### Exponential Backoff
- Maximum retry count: 5
- Base delay: 100 ms, max delay: 5000 ms
- Implemented in both the Julia and JavaScript bridges
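The retry policy above (5 attempts, 100 ms base doubling up to a 5000 ms cap) can be sketched as follows; this is an illustrative Python version, not the bridge code itself (the injectable `sleep` parameter is ours, for testability):

```python
import time

def with_backoff(op, max_retries=5, base_ms=100, max_ms=5000, sleep=time.sleep):
    """Retry `op` with exponential backoff: 100 ms, 200 ms, 400 ms, ... capped at 5000 ms."""
    for attempt in range(max_retries):
        try:
            return op()
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries: surface the last error
            delay_ms = min(base_ms * 2 ** attempt, max_ms)
            sleep(delay_ms / 1000)
```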
### Correlation ID Logging
- Log the correlation_id at every stage
- Stages include: send, receive, serialize, deserialize
- Use a structured logging format
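One way to keep the correlation_id in every log line is a small structured-logging helper; an illustrative Python sketch (the helper name and JSON-lines format are ours):

```python
import json
import logging

logger = logging.getLogger("natsbridge")

def log_stage(stage, correlation_id, **fields):
    """Emit one structured (JSON) log line carrying the correlation_id."""
    line = json.dumps({"stage": stage, "correlation_id": correlation_id, **fields})
    logger.info(line)
    return line

log_stage("send", "123e4567-...", subject="/chat/room1", transport="direct")
```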
## Testing

Run the test scripts:

```bash
# Scenario 1: Command & Control (JavaScript sender)
node test/scenario1_command_control.js

# Scenario 2: Large Arrow Table (JavaScript sender)
node test/scenario2_large_table.js
```

## Troubleshooting

### Common Issues

1. **NATS Connection Failed**
   - Ensure the NATS server is running
   - Check the NATS_URL configuration

2. **HTTP Upload Failed**
   - Ensure the file server is running
   - Check the FILESERVER_URL configuration
   - Verify upload permissions

3. **Arrow IPC Deserialization Error**
   - Ensure the data is properly serialized to Arrow format
   - Check Arrow version compatibility

## License

MIT
docs/implementation.md (1178 lines, new file; diff suppressed because it is too large)
etc.jl (45 lines changed)
@@ -1,42 +1,9 @@

Removed content (the old docstring):

""" fileServerURL = "http://192.168.88.104:8080"
filepath = "/home/ton/docker-apps/sendreceive/image/test.zip"
filename = basename(filepath)
filebytes = read(filepath)

    plik_oneshot_upload - Upload a single file to a plik server using one-shot mode

This function uploads a raw byte array to a plik server in one-shot mode (no upload session).
It first creates a one-shot upload session by sending a POST request with `{"OneShot": true}`,
retrieves an upload ID and token, then uploads the file data as multipart form data using the token.

The function handles the entire flow:
1. Obtains an upload ID and token from the server
2. Uploads the provided binary data as a file using the `X-UploadToken` header
3. Returns identifiers and a download URL for the uploaded file

# Arguments:
- `fileServerURL::String` - Base URL of the plik server (e.g., `"http://192.168.88.104:8080"`)
- `filename::String` - Name of the file being uploaded
- `data::Vector{UInt8}` - Raw byte data of the file content

# Return:
- A named tuple with fields:
  - `uploadid::String` - ID of the one-shot upload session
  - `fileid::String` - ID of the uploaded file within the session
  - `downloadurl::String` - Full URL to download the uploaded file

# Example
```jldoctest
using HTTP, JSON

# Example data: "Hello World!" as bytes
data = Vector{UInt8}(codeunits("Hello World!"))

# Upload to a local plik server
result = plik_oneshot_upload("http://192.168.88.104:8080", "hello.txt", data)

# Download URL for the uploaded file
println(result.downloadurl)
```
"""

Added content (the new task text):

Task: Update README.md to reflect recent changes in the NATSbridge package.

Context: the package has been updated with the NATS_connection keyword and the publish_message function.

Requirements:

Source of Truth: Treat the updated NATSbridge code as the definitive source. Update README.md to align exactly with these changes.
API Consistency: Ensure the Main Package API (e.g., smartsend(), publish_message()) uses consistent naming across all three supported languages.
Ecosystem Variance: Low-level native functions (e.g., NATS.connect(), JSON.read()) should follow the conventions of the specific language ecosystem and do not require cross-language consistency.
examples/tutorial.md (622 lines, new file)
@@ -0,0 +1,622 @@
# NATSBridge Tutorial

A step-by-step guide to get started with NATSBridge - a high-performance, bi-directional data bridge for **Julia**, **JavaScript**, and **Python/Micropython**.

## Table of Contents

1. [Overview](#overview)
2. [Prerequisites](#prerequisites)
3. [Installation](#installation)
4. [Quick Start](#quick-start)
5. [Basic Examples](#basic-examples)
6. [Advanced Usage](#advanced-usage)
7. [Cross-Platform Communication](#cross-platform-communication)

---

## Overview

NATSBridge enables seamless communication between Julia, JavaScript, and Python/Micropython applications through NATS, with automatic transport selection based on payload size:

- **Direct Transport**: Payloads < 1 MB are sent directly via NATS (Base64 encoded)
- **Link Transport**: Payloads >= 1 MB are uploaded to an HTTP file server and referenced via URL

### Supported Payload Types

| Type | Description |
|------|-------------|
| `text` | Plain text strings |
| `dictionary` | JSON-serializable dictionaries |
| `table` | Tabular data (Arrow IPC format) |
| `image` | Image data (PNG, JPG bytes) |
| `audio` | Audio data (WAV, MP3 bytes) |
| `video` | Video data (MP4, AVI bytes) |
| `binary` | Generic binary data |
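Payloads are passed to `smartsend` as `(dataname, data, type)` tuples, where the `type` column must be one of the values in this table; a minimal validator, assuming the table is exhaustive (the function name is ours, not part of the package):

```python
SUPPORTED_TYPES = {"text", "dictionary", "table", "image", "audio", "video", "binary"}

def validate_payloads(payloads):
    """Check a list of (dataname, data, type) tuples against the supported types."""
    for dataname, _data, ptype in payloads:
        if ptype not in SUPPORTED_TYPES:
            raise ValueError(f"unsupported payload type {ptype!r} for {dataname!r}")
    return payloads

validate_payloads([("message", "Hello World", "text"),
                   ("pic", b"\x89PNG", "image")])
```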
|
||||
---
|
||||
|
||||
## Prerequisites
|
||||
|
||||
Before you begin, ensure you have:
|
||||
|
||||
1. **NATS Server** running (or accessible)
|
||||
2. **HTTP File Server** (optional, for large payloads > 1MB)
|
||||
3. **One of the supported platforms**: Julia, JavaScript (Node.js), or Python/Micropython
|
||||
|
||||
---
|
||||
|
||||
## Installation
|
||||
|
||||
### Julia
|
||||
|
||||
```julia
|
||||
using Pkg
|
||||
Pkg.add("NATS")
|
||||
Pkg.add("Arrow")
|
||||
Pkg.add("JSON3")
|
||||
Pkg.add("HTTP")
|
||||
Pkg.add("UUIDs")
|
||||
Pkg.add("Dates")
|
||||
```
|
||||
|
||||
### JavaScript
|
||||
|
||||
```bash
|
||||
npm install nats.js apache-arrow uuid base64-url
|
||||
```
|
||||
|
||||
### Python/Micropython
|
||||
|
||||
1. Copy `src/nats_bridge.py` to your device
|
||||
2. Install dependencies:
|
||||
|
||||
**For Python (desktop):**
|
||||
```bash
|
||||
pip install nats-py
|
||||
```
|
||||
|
||||
**For Micropython:**
|
||||
- `urequests` for HTTP requests
|
||||
- `base64` for base64 encoding (built-in)
|
||||
- `json` for JSON handling (built-in)
|
||||
|
||||
---
|
||||
|
||||
## Quick Start
|
||||
|
||||
### Step 1: Start NATS Server
|
||||
|
||||
```bash
|
||||
docker run -p 4222:4222 nats:latest
|
||||
```
|
||||
|
||||
### Step 2: Start HTTP File Server (Optional)
|
||||
|
||||
```bash
|
||||
# Create a directory for file uploads
|
||||
mkdir -p /tmp/fileserver
|
||||
|
||||
# Use Python's built-in server
|
||||
python3 -m http.server 8080 --directory /tmp/fileserver
|
||||
```
|
||||
|
||||
### Step 3: Send Your First Message
|
||||
|
||||
#### Python/Micropython
|
||||
|
||||
```python
|
||||
from nats_bridge import smartsend
|
||||
|
||||
# Send a text message (is_publish=True by default)
|
||||
data = [("message", "Hello World", "text")]
|
||||
env, env_json_str = smartsend("/chat/room1", data, broker_url="nats://localhost:4222")
|
||||
print("Message sent!")
|
||||
|
||||
# Or use is_publish=False to get envelope and JSON without publishing
|
||||
env, env_json_str = smartsend("/chat/room1", data, broker_url="nats://localhost:4222", is_publish=False)
|
||||
# env: MessageEnvelope object
|
||||
# env_json_str: JSON string for publishing to NATS
|
||||
```
|
||||
|
||||
#### JavaScript
|
||||
|
||||
```javascript
|
||||
const { smartsend } = require('./src/NATSBridge');
|
||||
|
||||
// Send a text message (isPublish=true by default)
|
||||
await smartsend("/chat/room1", [
|
||||
{ dataname: "message", data: "Hello World", type: "text" }
|
||||
], { brokerUrl: "nats://localhost:4222" });
|
||||
|
||||
console.log("Message sent!");
|
||||
|
||||
// Or use isPublish=false to get envelope and JSON without publishing
|
||||
const { env, env_json_str } = await smartsend("/chat/room1", [
|
||||
{ dataname: "message", data: "Hello World", type: "text" }
|
||||
], { brokerUrl: "nats://localhost:4222", isPublish: false });
|
||||
// env: MessageEnvelope object
|
||||
// env_json_str: JSON string for publishing to NATS
|
||||
```
|
||||
|
||||
#### Julia
|
||||
|
||||
```julia
|
||||
using NATSBridge
|
||||
|
||||
# Send a text message
|
||||
data = [("message", "Hello World", "text")]
|
||||
env, env_json_str = smartsend("/chat/room1", data, broker_url="nats://localhost:4222")
|
||||
# env: msg_envelope_v1 object with all metadata and payloads
|
||||
# env_json_str: JSON string representation of the envelope for publishing
|
||||
println("Message sent!")
|
||||
```
|
||||
|
||||
### Step 4: Receive Messages
|
||||
|
||||
#### Python/Micropython
|
||||
|
||||
```python
|
||||
from nats_bridge import smartreceive
|
||||
|
||||
# Receive and process message
|
||||
env = smartreceive(msg)
|
||||
for dataname, data, type in env["payloads"]:
|
||||
print(f"Received {dataname}: {data}")
|
||||
```
|
||||
|
||||
#### JavaScript
|
||||
|
||||
```javascript
|
||||
const { smartreceive } = require('./src/NATSBridge');
|
||||
|
||||
// Receive and process message
|
||||
const env = await smartreceive(msg);
|
||||
for (const payload of env.payloads) {
|
||||
console.log(`Received ${payload.dataname}: ${payload.data}`);
|
||||
}
|
||||
```
|
||||
|
||||
#### Julia
|
||||
|
||||
```julia
|
||||
using NATSBridge
|
||||
|
||||
# Receive and process message
|
||||
env = smartreceive(msg; fileserver_download_handler=_fetch_with_backoff)
|
||||
for (dataname, data, type) in env["payloads"]
|
||||
println("Received $dataname: $data")
|
||||
end
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Basic Examples

### Example 1: Sending a Dictionary

#### Python/Micropython

```python
from nats_bridge import smartsend

# Create configuration dictionary
config = {
    "wifi_ssid": "MyNetwork",
    "wifi_password": "password123",
    "update_interval": 60
}

# Send as dictionary type
data = [("config", config, "dictionary")]
env, env_json_str = smartsend("/device/config", data, broker_url="nats://localhost:4222")
```

#### JavaScript

```javascript
const { smartsend } = require('./src/NATSBridge');

const config = {
    wifi_ssid: "MyNetwork",
    wifi_password: "password123",
    update_interval: 60
};

const { env, env_json_str } = await smartsend("/device/config", [
    { dataname: "config", data: config, type: "dictionary" }
], { brokerUrl: "nats://localhost:4222" });
```

#### Julia

```julia
using NATSBridge

config = Dict(
    "wifi_ssid" => "MyNetwork",
    "wifi_password" => "password123",
    "update_interval" => 60
)

data = [("config", config, "dictionary")]
env, env_json_str = smartsend("/device/config", data, broker_url="nats://localhost:4222")
```
### Example 2: Sending Binary Data (Image)

#### Python/Micropython

```python
from nats_bridge import smartsend

# Read image file
with open("image.png", "rb") as f:
    image_data = f.read()

# Send as binary type
data = [("user_image", image_data, "binary")]
env, env_json_str = smartsend("/chat/image", data, broker_url="nats://localhost:4222")
```

#### JavaScript

```javascript
const { smartsend } = require('./src/NATSBridge');

// Read image file (Node.js)
const fs = require('fs');
const image_data = fs.readFileSync('image.png');

const { env, env_json_str } = await smartsend("/chat/image", [
    { dataname: "user_image", data: image_data, type: "binary" }
], { brokerUrl: "nats://localhost:4222" });
```

#### Julia

```julia
using NATSBridge

# Read image file
image_data = read("image.png")

data = [("user_image", image_data, "binary")]
env, env_json_str = smartsend("/chat/image", data, broker_url="nats://localhost:4222")
```
### Example 3: Request-Response Pattern

#### Python/Micropython (Requester)

```python
from nats_bridge import smartsend

# Send command with reply-to
data = [("command", {"action": "read_sensor"}, "dictionary")]
env, env_json_str = smartsend(
    "/device/command",
    data,
    broker_url="nats://localhost:4222",
    reply_to="/device/response",
    reply_to_msg_id="cmd-001"
)
# env: MessageEnvelope object
# env_json_str: JSON string for publishing to NATS
```

#### JavaScript (Responder)

```javascript
const { smartreceive, smartsend } = require('./src/NATSBridge');

// Subscribe to command topic
const sub = nc.subscribe("/device/command");

for await (const msg of sub) {
    const env = await smartreceive(msg);

    // Process command
    for (const payload of env.payloads) {
        if (payload.dataname === "command") {
            const command = payload.data;

            if (command.action === "read_sensor") {
                // Read sensor and send response
                const response = {
                    sensor_id: "sensor-001",
                    value: 42.5,
                    timestamp: new Date().toISOString()
                };

                await smartsend("/device/response", [
                    { dataname: "sensor_data", data: response, type: "dictionary" }
                ], {
                    reply_to: env.replyTo,
                    reply_to_msg_id: env.msgId
                });
            }
        }
    }
}
```

---

## Advanced Usage
### Example 4: Large Payloads (File Server)

For payloads larger than 1MB, NATSBridge automatically uses the file server:
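The size-based switch can be sketched in a few lines (illustrative only: the constant name mirrors `DEFAULT_SIZE_THRESHOLD` from NATSBridge.js, and `choose_transport` is a hypothetical helper, not part of the library API):

```python
# Illustrative sketch of the transport decision, not library code.
# DEFAULT_SIZE_THRESHOLD mirrors the constant in NATSBridge.js.
DEFAULT_SIZE_THRESHOLD = 1_000_000  # 1MB

def choose_transport(payload_size: int, threshold: int = DEFAULT_SIZE_THRESHOLD) -> str:
    """Payloads below the threshold travel inline ("direct");
    larger ones are uploaded and replaced by a URL ("link")."""
    return "direct" if payload_size < threshold else "link"

print(choose_transport(500_000))    # small payload: direct
print(choose_transport(2_000_000))  # large payload: link
```

The threshold is compared with strict less-than, so a payload of exactly the threshold size goes through the file server.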
#### Python/Micropython

```python
from nats_bridge import smartsend
import os

# Create large data (> 1MB)
large_data = os.urandom(2_000_000)  # 2MB of random data

# Send with file server URL
env, env_json_str = smartsend(
    "/data/large",
    [("large_file", large_data, "binary")],
    broker_url="nats://localhost:4222",
    fileserver_url="http://localhost:8080",
    size_threshold=1_000_000
)

# The envelope will contain the download URL
print(f"File uploaded to: {env.payloads[0].data}")
```

#### JavaScript

```javascript
const { smartsend } = require('./src/NATSBridge');

// Create large data (> 1MB)
const largeData = new ArrayBuffer(2_000_000);
const view = new Uint8Array(largeData);
view.fill(42); // Fill with some data

const { env, env_json_str } = await smartsend("/data/large", [
    { dataname: "large_file", data: largeData, type: "binary" }
], {
    brokerUrl: "nats://localhost:4222",
    fileserverUrl: "http://localhost:8080",
    sizeThreshold: 1_000_000
});
```

#### Julia

```julia
using NATSBridge

# Create large data (> 1MB)
large_data = rand(UInt8, 2_000_000)

env, env_json_str = smartsend(
    "/data/large",
    [("large_file", large_data, "binary")],
    broker_url="nats://localhost:4222",
    fileserver_url="http://localhost:8080"
)

# The envelope will contain the download URL
println("File uploaded to: $(env.payloads[1].data)")
```
### Example 5: Mixed Content (Chat with Text + Image)

NATSBridge supports sending multiple payloads with different types in a single message:

#### Python/Micropython

```python
from nats_bridge import smartsend

# Read image file
with open("avatar.png", "rb") as f:
    image_data = f.read()

# Send mixed content
data = [
    ("message_text", "Hello with image!", "text"),
    ("user_avatar", image_data, "image")
]

env, env_json_str = smartsend("/chat/mixed", data, broker_url="nats://localhost:4222")
```

#### JavaScript

```javascript
const { smartsend } = require('./src/NATSBridge');

const fs = require('fs');

const { env, env_json_str } = await smartsend("/chat/mixed", [
    {
        dataname: "message_text",
        data: "Hello with image!",
        type: "text"
    },
    {
        dataname: "user_avatar",
        data: fs.readFileSync("avatar.png"),
        type: "image"
    }
], { brokerUrl: "nats://localhost:4222" });
```

#### Julia

```julia
using NATSBridge

image_data = read("avatar.png")

data = [
    ("message_text", "Hello with image!", "text"),
    ("user_avatar", image_data, "image")
]

env, env_json_str = smartsend("/chat/mixed", data, broker_url="nats://localhost:4222")
```
### Example 6: Table Data (Arrow IPC)

For tabular data, NATSBridge uses Apache Arrow IPC format:

#### Python/Micropython

```python
from nats_bridge import smartsend
import pandas as pd

# Create DataFrame
df = pd.DataFrame({
    "id": [1, 2, 3],
    "name": ["Alice", "Bob", "Charlie"],
    "score": [95, 88, 92]
})

# Send as table type
data = [("students", df, "table")]
env, env_json_str = smartsend("/data/students", data, broker_url="nats://localhost:4222")
```

#### Julia

```julia
using NATSBridge
using DataFrames

# Create DataFrame
df = DataFrame(
    id = [1, 2, 3],
    name = ["Alice", "Bob", "Charlie"],
    score = [95, 88, 92]
)

data = [("students", df, "table")]
env, env_json_str = smartsend("/data/students", data, broker_url="nats://localhost:4222")
```

---
## Cross-Platform Communication

NATSBridge enables seamless communication between different platforms:

### Julia ↔ JavaScript

#### Julia Sender

```julia
using NATSBridge

# Send dictionary from Julia to JavaScript
config = Dict("step_size" => 0.01, "iterations" => 1000)
data = [("config", config, "dictionary")]
env, env_json_str = smartsend("/analysis/config", data, broker_url="nats://localhost:4222")
```

#### JavaScript Receiver

```javascript
const { smartreceive } = require('./src/NATSBridge');

// Receive dictionary from Julia
const env = await smartreceive(msg);
for (const payload of env.payloads) {
    if (payload.type === "dictionary") {
        console.log("Received config:", payload.data);
        // payload.data = { step_size: 0.01, iterations: 1000 }
    }
}
```

### JavaScript ↔ Python

#### JavaScript Sender

```javascript
const { smartsend } = require('./src/NATSBridge');

const { env, env_json_str } = await smartsend("/data/transfer", [
    { dataname: "message", data: "Hello from JS!", type: "text" }
], { brokerUrl: "nats://localhost:4222" });
```

#### Python Receiver

```python
from nats_bridge import smartreceive

env = smartreceive(msg)
for dataname, data, type in env["payloads"]:
    if type == "text":
        print(f"Received from JS: {data}")
```

### Python ↔ Julia

#### Python Sender

```python
from nats_bridge import smartsend

data = [("message", "Hello from Python!", "text")]
env, env_json_str = smartsend("/chat/python", data, broker_url="nats://localhost:4222")
```

#### Julia Receiver

```julia
using NATSBridge

env = smartreceive(msg; fileserver_download_handler=_fetch_with_backoff)
for (dataname, data, type) in env["payloads"]
    if type == "text"
        println("Received from Python: $data")
    end
end
```

---
## Next Steps

1. **Explore the test directory** for more examples
2. **Check the documentation** for advanced configuration options
3. **Join the community** to share your use cases

---
## Troubleshooting

### Connection Issues

- Ensure the NATS server is running: `docker ps | grep nats`
- Check firewall settings
- Verify the NATS URL configuration
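To tell network problems apart from client misconfiguration, a quick TCP-level check is often enough (a minimal standard-library sketch; it only tests that the port accepts connections, not the NATS protocol itself):

```python
import socket

def nats_reachable(host: str = "localhost", port: int = 4222, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to the NATS port can be opened."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if not nats_reachable():
    print("Cannot reach NATS on localhost:4222 - is the server running?")
```

If this returns True but publishes still fail, the problem is more likely in the broker URL or credentials than in the network.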

### File Server Issues

- Ensure the file server is running and accessible
- Check upload permissions
- Verify the file server URL configuration

### Serialization Errors

- Verify that the data matches the declared payload type
- Check that binary data is in the correct format (`bytes` in Python, `Vector{UInt8}` in Julia)
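These type checks can be mimicked before sending (a hypothetical pre-flight validator modeled on the error messages in the JavaScript `_serialize_data`; `validate_payload` is not part of the library API):

```python
def validate_payload(data, payload_type: str) -> None:
    """Raise TypeError if data does not match the declared payload type.
    Modeled on the library's serialization checks; illustrative only."""
    if payload_type == "text" and not isinstance(data, str):
        raise TypeError("Text data must be a str")
    if payload_type == "dictionary" and not isinstance(data, dict):
        raise TypeError("Dictionary data must be a dict")
    if payload_type in ("image", "audio", "video", "binary") and not isinstance(data, (bytes, bytearray)):
        raise TypeError(f"{payload_type} data must be bytes")

validate_payload("hello", "text")         # ok
validate_payload(b"\x89PNG...", "image")  # ok
```

Running such a check on each `(dataname, data, type)` tuple before `smartsend` turns a remote deserialization failure into a local, immediately debuggable error.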

---

## License

MIT

1073	examples/walkthrough.md	Normal file
File diff suppressed because it is too large
28	package.json	Normal file
@@ -0,0 +1,28 @@
{
  "name": "natsbridge",
  "version": "1.0.0",
  "description": "Bi-Directional Data Bridge for JavaScript using NATS",
  "main": "src/NATSBridge.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1",
    "lint": "eslint src/*.js test/*.js"
  },
  "keywords": [
    "nats",
    "message-broker",
    "bridge",
    "arrow",
    "serialization"
  ],
  "author": "",
  "license": "MIT",
  "dependencies": {
    "nats": "^2.9.0",
    "apache-arrow": "^14.0.0",
    "uuid": "^9.0.0"
  },
  "devDependencies": {
    "eslint": "^8.0.0",
    "jest": "^29.0.0"
  }
}
14	plik_fileserver/docker-compose.yml	Normal file
@@ -0,0 +1,14 @@
services:
  plik:
    image: rootgg/plik:latest
    container_name: plik-server
    restart: unless-stopped
    ports:
      - "8080:8080"
    volumes:
      # # Mount the config file (created below)
      # - ./plikd.cfg:/home/plik/server/plikd.cfg
      # Mount local folder for uploads and database
      - ./plik-data:/data
    # Set user to match your host UID to avoid permission issues
    user: "1000:1000"
1348	src/NATSBridge.jl
File diff suppressed because it is too large
@@ -1,245 +1,753 @@
/**
 * Bi-Directional Data Bridge - JavaScript Module
 * Implements SmartSend and SmartReceive for NATS communication
 * NATSBridge.js - Bi-Directional Data Bridge for JavaScript
 * Implements smartsend and smartreceive for NATS communication
 *
 * This module provides functionality for sending and receiving data across network boundaries
 * using NATS as the message bus, with support for both direct payload transport and
 * URL-based transport for larger payloads.
 *
 * File Server Handler Architecture:
 * The system uses handler functions to abstract file server operations, allowing support
 * for different file server implementations (e.g., Plik, AWS S3, custom HTTP server).
 *
 * Handler Function Signatures:
 *
 * ```javascript
 * // Upload handler - uploads data to file server and returns URL
 * // The handler is passed to smartsend as fileserverUploadHandler parameter
 * // It receives: (fileserver_url, dataname, data)
 * // Returns: { status, uploadid, fileid, url }
 * async function plik_oneshot_upload(fileserver_url, dataname, data) { ... }
 *
 * // Download handler - fetches data from file server URL with exponential backoff
 * // The handler is passed to smartreceive as fileserverDownloadHandler parameter
 * // It receives: (url, max_retries, base_delay, max_delay, correlation_id)
 * // Returns: ArrayBuffer (the downloaded data)
 * async function fileserverDownloadHandler(url, max_retries, base_delay, max_delay, correlation_id) { ... }
 * ```
 *
 * Multi-Payload Support (Standard API):
 * The system uses a standardized list-of-tuples format for all payload operations.
 * Even when sending a single payload, the user must wrap it in a list.
 *
 * API Standard:
 * ```javascript
 * // Input format for smartsend (always a list of tuples with type info)
 * [{ dataname, data, type }, ...]
 *
 * // Output format for smartreceive (always returns a list of tuples)
 * [{ dataname, data, type }, ...]
 * ```
 *
 * Supported types: "text", "dictionary", "table", "image", "audio", "video", "binary"
 */

const { v4: uuidv4 } = require('uuid');
const { decode, encode } = require('base64-url');
const Arrow = require('apache-arrow');
// ---------------------------------------------- 100 --------------------------------------------- #

// Constants
const DEFAULT_SIZE_THRESHOLD = 1_000_000; // 1MB
const DEFAULT_NATS_URL = 'nats://localhost:4222';
const DEFAULT_FILESERVER_URL = 'http://localhost:8080/upload';
const DEFAULT_SIZE_THRESHOLD = 1_000_000; // 1MB - threshold for switching from direct to link transport
const DEFAULT_NATS_URL = "nats://localhost:4222"; // Default NATS server URL
const DEFAULT_FILESERVER_URL = "http://localhost:8080"; // Default HTTP file server URL for link transport

// Logging helper
function logTrace(correlationId, message) {
// Helper: Generate UUID v4
function uuid4() {
    // Simple UUID v4 generator
    return 'xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx'.replace(/[xy]/g, function(c) {
        var r = Math.random() * 16 | 0, v = c == 'x' ? r : (r & 0x3 | 0x8);
        return v.toString(16);
    });
}

// Helper: Log with correlation ID and timestamp
function log_trace(correlation_id, message) {
    const timestamp = new Date().toISOString();
    console.log(`[${timestamp}] [Correlation: ${correlationId}] ${message}`);
    console.log(`[${timestamp}] [Correlation: ${correlation_id}] ${message}`);
}

// Message Envelope Class
class MessageEnvelope {
    constructor(options = {}) {
        this.correlation_id = options.correlation_id || uuidv4();
        this.type = options.type || 'json';
        this.transport = options.transport || 'direct';
        this.payload = options.payload || null;
        this.url = options.url || null;
        this.metadata = options.metadata || {};
    }

    static fromJSON(jsonStr) {
        const data = JSON.parse(jsonStr);
        return new MessageEnvelope({
            correlation_id: data.correlation_id,
            type: data.type,
            transport: data.transport,
            payload: data.payload || null,
            url: data.url || null,
            metadata: data.metadata || {}
        });
    }

    toJSON() {
        const obj = {
            correlation_id: this.correlation_id,
            type: this.type,
            transport: this.transport
        };

        if (this.payload) {
            obj.payload = this.payload;
        }

        if (this.url) {
            obj.url = this.url;
        }

        if (Object.keys(this.metadata).length > 0) {
            obj.metadata = this.metadata;
        }

        return JSON.stringify(obj);
// Helper: Get size of data in bytes
function getDataSize(data) {
    if (typeof data === 'string') {
        return new TextEncoder().encode(data).length;
    } else if (data instanceof ArrayBuffer || data instanceof Uint8Array) {
        return data.byteLength;
    } else if (typeof data === 'object' && data !== null) {
        // For objects, serialize to JSON and measure
        return new TextEncoder().encode(JSON.stringify(data)).length;
    }
    return 0;
}

// SmartSend for JavaScript - Handles transport selection based on payload size
async function SmartSend(subject, data, type = 'json', options = {}) {
    const {
        natsUrl = DEFAULT_NATS_URL,
        fileserverUrl = DEFAULT_FILESERVER_URL,
        sizeThreshold = DEFAULT_SIZE_THRESHOLD,
        correlationId = uuidv4()
    } = options;

    logTrace(correlationId, `Starting SmartSend for subject: ${subject}`);

    // Serialize data based on type
    const payloadBytes = _serializeData(data, type, correlationId);
    const payloadSize = payloadBytes.length;

    logTrace(correlationId, `Serialized payload size: ${payloadSize} bytes`);

    // Decision: Direct vs Link
    if (payloadSize < sizeThreshold) {
        // Direct path - Base64 encode and send via NATS
        const payloadBase64 = encode(payloadBytes);
        logTrace(correlationId, `Using direct transport for ${payloadSize} bytes`);

        const env = new MessageEnvelope({
            correlation_id: correlationId,
            type: type,
            transport: 'direct',
            payload: payloadBase64,
            metadata: {
                content_length: payloadSize.toString(),
                format: 'arrow_ipc_stream'
            }
        });

        await publishMessage(natsUrl, subject, env.toJSON(), correlationId);
        return env;
    } else {
        // Link path - Upload to HTTP server, send URL via NATS
        logTrace(correlationId, `Using link transport, uploading to fileserver`);

        const url = await uploadToServer(payloadBytes, fileserverUrl, correlationId);

        const env = new MessageEnvelope({
            correlation_id: correlationId,
            type: type,
            transport: 'link',
            url: url,
            metadata: {
                content_length: payloadSize.toString(),
                format: 'arrow_ipc_stream'
            }
        });

        await publishMessage(natsUrl, subject, env.toJSON(), correlationId);
        return env;
// Helper: Convert ArrayBuffer to Base64 string
function arrayBufferToBase64(buffer) {
    const bytes = new Uint8Array(buffer);
    let binary = '';
    for (let i = 0; i < bytes.length; i++) {
        binary += String.fromCharCode(bytes[i]);
    }
    return btoa(binary);
}

// Helper: Convert Base64 string to ArrayBuffer
function base64ToArrayBuffer(base64) {
    const binaryString = atob(base64);
    const len = binaryString.length;
    const bytes = new Uint8Array(len);
    for (let i = 0; i < len; i++) {
        bytes[i] = binaryString.charCodeAt(i);
    }
    return bytes.buffer;
}

// Helper: Convert Uint8Array to Base64 string
function uint8ArrayToBase64(uint8array) {
    let binary = '';
    for (let i = 0; i < uint8array.byteLength; i++) {
        binary += String.fromCharCode(uint8array[i]);
    }
    return btoa(binary);
}

// Helper: Convert Base64 string to Uint8Array
function base64ToUint8Array(base64) {
    const binaryString = atob(base64);
    const len = binaryString.length;
    const bytes = new Uint8Array(len);
    for (let i = 0; i < len; i++) {
        bytes[i] = binaryString.charCodeAt(i);
    }
    return bytes;
}

// Helper: Serialize data based on type
function _serializeData(data, type, correlationId) {
    if (type === 'json') {
        const jsonStr = JSON.stringify(data);
        return Buffer.from(jsonStr, 'utf8');
    } else if (type === 'table') {
        // Table data - convert to Arrow IPC stream
        const writer = new Arrow.Writer();
        writer.writeTable(data);
        return writer.toByteArray();
    } else if (type === 'binary') {
        // Binary data - treat as binary
        if (data instanceof Buffer) {
            return data;
        } else if (Array.isArray(data)) {
            return Buffer.from(data);
function _serialize_data(data, type) {
    /**
     * Serialize data according to specified format
     *
     * Supported formats:
     * - "text": Treats data as text and converts to UTF-8 bytes
     * - "dictionary": Serializes data as JSON and returns the UTF-8 byte representation
     * - "table": Serializes data as an Arrow IPC stream (table format) - NOT IMPLEMENTED (requires arrow library)
     * - "image": Expects binary data (ArrayBuffer) and returns it as bytes
     * - "audio": Expects binary data (ArrayBuffer) and returns it as bytes
     * - "video": Expects binary data (ArrayBuffer) and returns it as bytes
     * - "binary": Generic binary data (ArrayBuffer or Uint8Array) and returns bytes
     */
    if (type === "text") {
        if (typeof data === 'string') {
            return new TextEncoder().encode(data);
        } else {
            throw new Error('Binary data must be binary (Buffer or Array)');
            throw new Error("Text data must be a String");
        }
    } else if (type === "dictionary") {
        // JSON data - serialize directly
        const jsonStr = JSON.stringify(data);
        return new TextEncoder().encode(jsonStr);
    } else if (type === "table") {
        // Table data - convert to Arrow IPC stream (NOT IMPLEMENTED in pure JavaScript)
        // This would require the apache-arrow library
        throw new Error("Table serialization requires apache-arrow library");
    } else if (type === "image") {
        if (data instanceof ArrayBuffer || data instanceof Uint8Array) {
            return data instanceof ArrayBuffer ? new Uint8Array(data) : data;
        } else {
            throw new Error("Image data must be ArrayBuffer or Uint8Array");
        }
    } else if (type === "audio") {
        if (data instanceof ArrayBuffer || data instanceof Uint8Array) {
            return data instanceof ArrayBuffer ? new Uint8Array(data) : data;
        } else {
            throw new Error("Audio data must be ArrayBuffer or Uint8Array");
        }
    } else if (type === "video") {
        if (data instanceof ArrayBuffer || data instanceof Uint8Array) {
            return data instanceof ArrayBuffer ? new Uint8Array(data) : data;
        } else {
            throw new Error("Video data must be ArrayBuffer or Uint8Array");
        }
    } else if (type === "binary") {
        if (data instanceof ArrayBuffer || data instanceof Uint8Array) {
            return data instanceof ArrayBuffer ? new Uint8Array(data) : data;
        } else {
            throw new Error("Binary data must be ArrayBuffer or Uint8Array");
        }
    } else {
        throw new Error(`Unknown type: ${type}`);
    }
}

// Helper: Publish message to NATS
async function publishMessage(natsUrl, subject, message, correlationId) {
    const { connect } = require('nats');

    try {
        const nc = await connect({ servers: [natsUrl] });
        await nc.publish(subject, message);
        logTrace(correlationId, `Message published to ${subject}`);
        nc.close();
    } catch (error) {
        logTrace(correlationId, `Failed to publish message: ${error.message}`);
        throw error;
    }
}

// SmartReceive for JavaScript - Handles both direct and link transport
async function SmartReceive(msg, options = {}) {
    const {
        fileserverUrl = DEFAULT_FILESERVER_URL,
        maxRetries = 5,
        baseDelay = 100,
        maxDelay = 5000
    } = options;

    const env = MessageEnvelope.fromJSON(msg.data);

    logTrace(env.correlation_id, `Processing received message`);

    if (env.transport === 'direct') {
        logTrace(env.correlation_id, `Direct transport - decoding payload`);

        const payloadBytes = decode(env.payload);
        const data = _deserializeData(payloadBytes, env.type, env.correlation_id, env.metadata);

        return { data, envelope: env };
    } else if (env.transport === 'link') {
        logTrace(env.correlation_id, `Link transport - fetching from URL`);

        const data = await _fetchWithBackoff(env.url, maxRetries, baseDelay, maxDelay, env.correlation_id);
        const result = _deserializeData(data, env.type, env.correlation_id, env.metadata);

        return { data: result, envelope: env };
    } else {
        throw new Error(`Unknown transport type: ${env.transport}`);
    }
}

// Helper: Fetch with exponential backoff
async function _fetchWithBackoff(url, maxRetries, baseDelay, maxDelay, correlationId) {
    let delay = baseDelay;

    for (let attempt = 1; attempt <= maxRetries; attempt++) {
        try {
            const response = await fetch(url);
            if (response.ok) {
                const buffer = await response.arrayBuffer();
                logTrace(correlationId, `Successfully fetched data from ${url} on attempt ${attempt}`);
                return new Uint8Array(buffer);
            } else {
                throw new Error(`Failed to fetch: ${response.status}`);
            }
        } catch (error) {
            logTrace(correlationId, `Attempt ${attempt} failed: ${error.message}`);

            if (attempt < maxRetries) {
                await new Promise(resolve => setTimeout(resolve, delay));
                delay = Math.min(delay * 2, maxDelay);
            }
        }
    }

    throw new Error(`Failed to fetch data after ${maxRetries} attempts`);
}

// Helper: Deserialize data based on type
async function _deserializeData(data, type, correlationId, metadata) {
    if (type === 'json') {
        const jsonStr = new TextDecoder().decode(data);
// Helper: Deserialize bytes based on type
function _deserialize_data(data, type, correlation_id) {
    /**
     * Deserialize bytes to data based on type
     *
     * Supported formats:
     * - "text": Converts bytes to string
     * - "dictionary": Parses JSON string
     * - "table": Parses Arrow IPC stream - NOT IMPLEMENTED (requires apache-arrow library)
     * - "image": Returns binary data
     * - "audio": Returns binary data
     * - "video": Returns binary data
     * - "binary": Returns binary data
     */
    if (type === "text") {
        const decoder = new TextDecoder();
        return decoder.decode(data);
    } else if (type === "dictionary") {
        const decoder = new TextDecoder();
        const jsonStr = decoder.decode(data);
        return JSON.parse(jsonStr);
    } else if (type === 'table') {
        // Deserialize Arrow IPC stream to Table
        const table = Arrow.Table.from(data);
        return table;
    } else if (type === 'binary') {
        // Return binary data
    } else if (type === "table") {
        // Table data - deserialize Arrow IPC stream (NOT IMPLEMENTED in pure JavaScript)
        throw new Error("Table deserialization requires apache-arrow library");
    } else if (type === "image") {
        return data;
    } else if (type === "audio") {
        return data;
    } else if (type === "video") {
        return data;
    } else if (type === "binary") {
        return data;
    } else {
        throw new Error(`Unknown type: ${type}`);
    }
}

// Export functions
module.exports = {
    SmartSend,
    SmartReceive,
    MessageEnvelope
};
// Helper: Upload data to file server
// Internal wrapper that adds correlation_id logging for smartsend
async function _upload_to_fileserver(fileserver_url, dataname, data, correlation_id) {
    /**
     * Internal upload helper - wraps plik_oneshot_upload to add correlation_id logging
     * This allows smartsend to pass correlation_id for tracing without changing the handler signature
     */
    log_trace(correlation_id, `Uploading ${dataname} to fileserver: ${fileserver_url}`);
    const result = await plik_oneshot_upload(fileserver_url, dataname, data);
    log_trace(correlation_id, `Uploaded to URL: ${result.url}`);
    return result;
}

// Helper: Fetch data from URL with exponential backoff
async function _fetch_with_backoff(url, max_retries, base_delay, max_delay, correlation_id) {
    /**
     * Fetch data from URL with retry logic using exponential backoff
     */
    let delay = base_delay;

    for (let attempt = 1; attempt <= max_retries; attempt++) {
        try {
            const response = await fetch(url);

            if (response.status === 200) {
                log_trace(correlation_id, `Successfully fetched data from ${url} on attempt ${attempt}`);
                const arrayBuffer = await response.arrayBuffer();
                return new Uint8Array(arrayBuffer);
            } else {
                throw new Error(`Failed to fetch: ${response.status} ${response.statusText}`);
            }
        } catch (e) {
            log_trace(correlation_id, `Attempt ${attempt} failed: ${e.message}`);

            if (attempt < max_retries) {
                // Sleep with exponential backoff
                await new Promise(resolve => setTimeout(resolve, delay));
                delay = Math.min(delay * 2, max_delay);
            }
        }
    }

    throw new Error(`Failed to fetch data after ${max_retries} attempts`);
}

// Helper: Get payload bytes from data
function _get_payload_bytes(data) {
    if (data instanceof ArrayBuffer || data instanceof Uint8Array) {
        return data instanceof ArrayBuffer ? new Uint8Array(data) : data;
    } else if (typeof data === 'string') {
        return new TextEncoder().encode(data);
    } else {
        // For objects, serialize to JSON
        return new TextEncoder().encode(JSON.stringify(data));
    }
}

// MessagePayload class - matches msg_payload_v1 Julia struct
class MessagePayload {
    /**
     * Represents a single payload in the message envelope
     * Matches Julia's msg_payload_v1 struct
     *
     * @param {Object} options - Payload options
     * @param {string} options.id - ID of this payload (e.g., "uuid4")
     * @param {string} options.dataname - Name of this payload (e.g., "login_image")
     * @param {string} options.payload_type - Payload type: "text", "dictionary", "table", "image", "audio", "video", "binary"
     * @param {string} options.transport - "direct" or "link"
     * @param {string} options.encoding - "none", "json", "base64", "arrow-ipc"
     * @param {number} options.size - Data size in bytes
     * @param {string|Uint8Array} options.data - Payload data (Uint8Array for direct, URL string for link)
     * @param {Object} options.metadata - Metadata for this payload
     */
    constructor(options) {
        this.id = options.id || uuid4();
        this.dataname = options.dataname;
        this.payload_type = options.payload_type;
        this.transport = options.transport;
        this.encoding = options.encoding;
        this.size = options.size;
        this.data = options.data;
        this.metadata = options.metadata || {};
    }

    // Convert to JSON object - uses snake_case to match Julia API
    toJSON() {
        const obj = {
            id: this.id,
            dataname: this.dataname,
            payload_type: this.payload_type,
            transport: this.transport,
            encoding: this.encoding,
            size: this.size
        };

        // Include data based on transport type
        if (this.transport === "direct" && this.data !== null && this.data !== undefined) {
            if (this.encoding === "base64" || this.encoding === "json") {
                obj.data = this.data;
            } else {
                // For other encodings, use base64
                const payloadBytes = _get_payload_bytes(this.data);
                obj.data = uint8ArrayToBase64(payloadBytes);
            }
        } else if (this.transport === "link" && this.data !== null && this.data !== undefined) {
            // For link transport, data is a URL string
            obj.data = this.data;
        }

        if (Object.keys(this.metadata).length > 0) {
            obj.metadata = this.metadata;
        }

        return obj;
    }
}
// MessageEnvelope class - matches msg_envelope_v1 Julia struct
class MessageEnvelope {
    /**
     * Represents the message envelope containing metadata and payloads
     * Matches Julia's msg_envelope_v1 struct
     *
     * @param {Object} options - Envelope options
     * @param {string} options.correlation_id - Unique identifier to track messages
     * @param {string} options.msg_id - This message id
     * @param {string} options.timestamp - Message published timestamp
     * @param {string} options.send_to - Topic/subject the sender sends to
     * @param {string} options.msg_purpose - Purpose of this message
     * @param {string} options.sender_name - Name of the sender
     * @param {string} options.sender_id - UUID of the sender
     * @param {string} options.receiver_name - Name of the receiver
     * @param {string} options.receiver_id - UUID of the receiver
     * @param {string} options.reply_to - Topic to reply to
     * @param {string} options.reply_to_msg_id - Message id this message is replying to
     * @param {string} options.broker_url - NATS server address
     * @param {Object} options.metadata - Metadata for the envelope
     * @param {Array<MessagePayload>} options.payloads - Array of payloads
     */
    constructor(options) {
        this.correlation_id = options.correlation_id || uuid4();
        this.msg_id = options.msg_id || uuid4();
        this.timestamp = options.timestamp || new Date().toISOString();
        this.send_to = options.send_to;
        this.msg_purpose = options.msg_purpose || "";
        this.sender_name = options.sender_name || "";
        this.sender_id = options.sender_id || uuid4();
        this.receiver_name = options.receiver_name || "";
        this.receiver_id = options.receiver_id || "";
        this.reply_to = options.reply_to || "";
        this.reply_to_msg_id = options.reply_to_msg_id || "";
        this.broker_url = options.broker_url || DEFAULT_NATS_URL;
        this.metadata = options.metadata || {};
        this.payloads = options.payloads || [];
    }

    // Convert to JSON object - uses snake_case to match Julia API
    toJSON() {
        const obj = {
            correlation_id: this.correlation_id,
            msg_id: this.msg_id,
            timestamp: this.timestamp,
            send_to: this.send_to,
            msg_purpose: this.msg_purpose,
            sender_name: this.sender_name,
            sender_id: this.sender_id,
            receiver_name: this.receiver_name,
            receiver_id: this.receiver_id,
            reply_to: this.reply_to,
            reply_to_msg_id: this.reply_to_msg_id,
            broker_url: this.broker_url
        };

        if (Object.keys(this.metadata).length > 0) {
            obj.metadata = this.metadata;
        }

        if (this.payloads.length > 0) {
            obj.payloads = this.payloads.map(p => p.toJSON());
        }

        return obj;
    }

    // Convert to JSON string
    toString() {
        return JSON.stringify(this.toJSON());
    }
}
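For orientation, a hand-built example of the snake_case wire format these two classes serialize to (only the field names come from the structs; all values here are hypothetical):

```javascript
// Hypothetical envelope JSON as produced by MessageEnvelope.toJSON() /
// toString(); field names match msg_envelope_v1 / msg_payload_v1.
const exampleEnvelope = {
    correlation_id: "11111111-1111-4111-8111-111111111111",
    msg_id: "22222222-2222-4222-8222-222222222222",
    timestamp: "2026-02-21T12:00:00.000Z",
    send_to: "chat.serviceB",
    msg_purpose: "chat",
    sender_name: "service-A",
    sender_id: "33333333-3333-4333-8333-333333333333",
    receiver_name: "service-B",
    receiver_id: "",
    reply_to: "chat.serviceA",
    reply_to_msg_id: "",
    broker_url: "nats://localhost:4222",
    payloads: [
        {
            id: "44444444-4444-4444-8444-444444444444",
            dataname: "greeting",
            payload_type: "text",
            transport: "direct",
            encoding: "base64",
            size: 5,
            data: "aGVsbG8="   // base64 of "hello"
        }
    ]
};
const wire = JSON.stringify(exampleEnvelope);
```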
// SmartSend function - matches Julia smartsend signature and behavior
async function smartsend(subject, data, options = {}) {
    /**
     * Send data either directly via NATS or via a fileserver URL, depending on payload size
     *
     * This function intelligently routes data delivery based on payload size relative to a threshold.
     * If the serialized payload is smaller than `size_threshold`, it encodes the data as Base64 and publishes directly over NATS.
     * Otherwise, it uploads the data to a fileserver and publishes only the download URL over NATS.
     *
     * @param {string} subject - NATS subject to publish the message to
     * @param {Array} data - List of {dataname, data, type} objects to send (must be a list, even for a single payload)
     * @param {Object} options - Additional options
     * @param {string} options.broker_url - URL of the NATS server (default: "nats://localhost:4222")
     * @param {string} options.fileserver_url - Base URL of the file server (default: "http://localhost:8080")
     * @param {Function} options.fileserver_upload_handler - Function to handle fileserver uploads
     * @param {number} options.size_threshold - Threshold in bytes separating direct vs link transport (default: 1MB)
     * @param {string} options.correlation_id - Optional correlation ID for tracing
     * @param {string} options.msg_purpose - Purpose of the message (default: "chat")
     * @param {string} options.sender_name - Name of the sender (default: "NATSBridge")
     * @param {string} options.receiver_name - Name of the receiver (default: "")
     * @param {string} options.receiver_id - UUID of the receiver (default: "")
     * @param {string} options.reply_to - Topic to reply to (default: "")
     * @param {string} options.reply_to_msg_id - Message ID this message is replying to (default: "")
     * @param {boolean} options.is_publish - Whether to automatically publish the message to NATS (default: true)
     *   - When true: Message is published to NATS automatically
     *   - When false: Returns (env, env_json_str) without publishing, allowing manual publishing
     * @returns {Promise<Object>} - A tuple-like object with { env: MessageEnvelope, env_json_str: string }
     *   - env: MessageEnvelope object with all metadata and payloads
     *   - env_json_str: JSON string representation of the envelope for manual publishing
     */
    const {
        broker_url = DEFAULT_NATS_URL,
        fileserver_url = DEFAULT_FILESERVER_URL,
        fileserver_upload_handler = _upload_to_fileserver,
        size_threshold = DEFAULT_SIZE_THRESHOLD,
        correlation_id = uuid4(),
        msg_purpose = "chat",
        sender_name = "NATSBridge",
        receiver_name = "",
        receiver_id = "",
        reply_to = "",
        reply_to_msg_id = "",
        is_publish = true // Whether to automatically publish the message to NATS
    } = options;

    log_trace(correlation_id, `Starting smartsend for subject: ${subject}`);

    // Generate message metadata
    const msg_id = uuid4();

    // Process each payload in the list
    const payloads = [];

    for (const payload of data) {
        const dataname = payload.dataname;
        const payloadData = payload.data;
        const payloadType = payload.type;

        // Serialize data based on type
        const payloadBytes = _serialize_data(payloadData, payloadType);
        const payloadSize = payloadBytes.byteLength;

        log_trace(correlation_id, `Serialized payload '${dataname}' (payload_type: ${payloadType}) size: ${payloadSize} bytes`);

        // Decision: Direct vs Link
        if (payloadSize < size_threshold) {
            // Direct path - Base64 encode and send via NATS
            const payloadB64 = uint8ArrayToBase64(payloadBytes);
            log_trace(correlation_id, `Using direct transport for ${payloadSize} bytes`);

            // Create MessagePayload for direct transport
            const payloadObj = new MessagePayload({
                dataname: dataname,
                payload_type: payloadType,
                transport: "direct",
                encoding: "base64",
                size: payloadSize,
                data: payloadB64,
                metadata: { payload_bytes: payloadSize }
            });
            payloads.push(payloadObj);
        } else {
            // Link path - Upload to HTTP server, send URL via NATS
            log_trace(correlation_id, `Using link transport, uploading to fileserver`);

            // Upload to HTTP server using plik_oneshot_upload handler
            const response = await fileserver_upload_handler(fileserver_url, dataname, payloadBytes);

            if (response.status !== 200) {
                throw new Error(`Failed to upload data to fileserver: ${response.status}`);
            }

            const url = response.url;
            log_trace(correlation_id, `Uploaded to URL: ${url}`);

            // Create MessagePayload for link transport
            const payloadObj = new MessagePayload({
                dataname: dataname,
                payload_type: payloadType,
                transport: "link",
                encoding: "none",
                size: payloadSize,
                data: url,
                metadata: {}
            });
            payloads.push(payloadObj);
        }
    }

    // Create MessageEnvelope with all payloads
    const env = new MessageEnvelope({
        correlation_id: correlation_id,
        msg_id: msg_id,
        send_to: subject,
        msg_purpose: msg_purpose,
        sender_name: sender_name,
        receiver_name: receiver_name,
        receiver_id: receiver_id,
        reply_to: reply_to,
        reply_to_msg_id: reply_to_msg_id,
        broker_url: broker_url,
        payloads: payloads
    });

    // Convert envelope to JSON string
    const env_json_str = env.toString();

    // Publish to NATS if is_publish is true
    if (is_publish) {
        await publish_message(broker_url, subject, env_json_str, correlation_id);
    }

    // Return both envelope and JSON string (tuple-like structure, matching Julia API)
    return {
        env: env,
        env_json_str: env_json_str
    };
}
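The core routing rule in smartsend reduces to a single comparison against `size_threshold`; a tiny self-contained sketch (the helper name `chooseTransport` is hypothetical):

```javascript
// Hypothetical condensation of smartsend's direct-vs-link decision:
// payloads strictly under the threshold go direct, everything else via link.
function chooseTransport(payloadSize, size_threshold = 1000000) {
    return payloadSize < size_threshold ? "direct" : "link";
}

// chooseTransport(5 * 1024) → "direct"
// chooseTransport(5 * 1024 * 1024) → "link"
```

Note the comparison is strict, so a payload exactly at the threshold takes the link path.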
// Helper: Publish message to NATS
async function publish_message(broker_url, subject, message, correlation_id) {
    /**
     * Publish a message to a NATS subject with proper connection management
     *
     * @param {string} broker_url - NATS server URL
     * @param {string} subject - NATS subject to publish to
     * @param {string} message - JSON message to publish
     * @param {string} correlation_id - Correlation ID for logging
     */
    log_trace(correlation_id, `Publishing message to ${subject}`);

    // For Node.js, we would use the nats.js library.
    // This is a placeholder; in production, import and use the actual nats library.

    // Example with nats.js:
    // import { connect, StringCodec } from 'nats';
    // const nc = await connect({ servers: [broker_url] });
    // nc.publish(subject, StringCodec().encode(message));
    // await nc.drain();

    // For now, just log the message
    console.log(`[NATS PUBLISH] Subject: ${subject}, Message: ${message.substring(0, 100)}...`);
}
// SmartReceive function - matches Julia smartreceive signature and behavior
async function smartreceive(msg, options = {}) {
    /**
     * Receive and process messages from NATS
     *
     * This function processes incoming NATS messages, handling both direct transport
     * (base64 decoded payloads) and link transport (URL-based payloads).
     *
     * @param {Object} msg - NATS message object with payload property
     * @param {Object} options - Additional options
     * @param {Function} options.fileserver_download_handler - Function to handle downloading data from file server URLs
     * @param {number} options.max_retries - Maximum retry attempts for fetching URL (default: 5)
     * @param {number} options.base_delay - Initial delay for exponential backoff in ms (default: 100)
     * @param {number} options.max_delay - Maximum delay for exponential backoff in ms (default: 5000)
     *
     * @returns {Promise<Object>} - JSON object of envelope with payloads field containing list of {dataname, data, type} tuples
     */
    const {
        fileserver_download_handler = _fetch_with_backoff,
        max_retries = 5,
        base_delay = 100,
        max_delay = 5000
    } = options;

    // Parse the JSON envelope
    const jsonStr = typeof msg.payload === 'string' ? msg.payload : new TextDecoder().decode(msg.payload);
    const json_data = JSON.parse(jsonStr);

    log_trace(json_data.correlation_id, `Processing received message`);

    // Process all payloads in the envelope
    const payloads_list = [];

    // Get number of payloads
    const num_payloads = json_data.payloads ? json_data.payloads.length : 0;

    for (let i = 0; i < num_payloads; i++) {
        const payload = json_data.payloads[i];
        const transport = payload.transport;
        const dataname = payload.dataname;

        if (transport === "direct") {
            // Direct transport - payload is in the message
            log_trace(json_data.correlation_id, `Direct transport - decoding payload '${dataname}'`);

            // Extract base64 payload from the payload
            const payload_b64 = payload.data;

            // Decode Base64 payload
            const payload_bytes = base64ToUint8Array(payload_b64);

            // Deserialize based on type
            const data_type = payload.payload_type;
            const data = _deserialize_data(payload_bytes, data_type, json_data.correlation_id);

            payloads_list.push({ dataname, data, type: data_type });
        } else if (transport === "link") {
            // Link transport - payload is at URL
            const url = payload.data;
            log_trace(json_data.correlation_id, `Link transport - fetching '${dataname}' from URL: ${url}`);

            // Fetch with exponential backoff using the download handler
            const downloaded_data = await fileserver_download_handler(
                url, max_retries, base_delay, max_delay, json_data.correlation_id
            );

            // Deserialize based on type
            const data_type = payload.payload_type;
            const data = _deserialize_data(downloaded_data, data_type, json_data.correlation_id);

            payloads_list.push({ dataname, data, type: data_type });
        } else {
            throw new Error(`Unknown transport type for payload '${dataname}': ${transport}`);
        }
    }

    // Replace payloads array with the processed list of {dataname, data, type} tuples
    // This matches Julia's smartreceive return format
    json_data.payloads = payloads_list;

    return json_data;
}
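The direct-transport branch relies on a base64 round-trip between sender and receiver. A self-contained Node.js illustration (the module's own helpers are `uint8ArrayToBase64` / `base64ToUint8Array`; `Buffer` is used here only for brevity):

```javascript
// Encode bytes to base64 (sender side) and decode them back (receiver side).
const originalBytes = new TextEncoder().encode("hello");
const b64 = Buffer.from(originalBytes).toString("base64");      // "aGVsbG8="
const decodedBytes = new Uint8Array(Buffer.from(b64, "base64"));
const roundTripped = new TextDecoder().decode(decodedBytes);    // "hello"
```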
// plik_oneshot_upload - matches Julia plik_oneshot_upload function
// Upload handler signature: plik_oneshot_upload(fileserver_url, dataname, data)
// Returns: { status, uploadid, fileid, url }
async function plik_oneshot_upload(file_server_url, dataname, data) {
    /**
     * Upload a single file to a plik server using one-shot mode
     *
     * This function uploads a raw byte array to a plik server in one-shot mode.
     * It first creates a one-shot upload session by sending a POST request with {"OneShot": true},
     * retrieves an upload ID and token, then uploads the file data as multipart form data using the token.
     *
     * This is the default upload handler used by smartsend.
     * Custom handlers can be passed via the fileserver_upload_handler option.
     *
     * @param {string} file_server_url - Base URL of the plik server (e.g., "http://localhost:8080")
     * @param {string} dataname - Name of the file being uploaded
     * @param {Uint8Array} data - Raw byte data of the file content
     * @returns {Promise<Object>} - Dictionary with keys: status, uploadid, fileid, url
     */

    // Step 1: Get upload ID and token
    const url_getUploadID = `${file_server_url}/upload`;
    const headers = { "Content-Type": "application/json" };
    const body = JSON.stringify({ OneShot: true });

    let http_response = await fetch(url_getUploadID, {
        method: "POST",
        headers: headers,
        body: body
    });

    const response_json = await http_response.json();
    const uploadid = response_json.id;
    const uploadtoken = response_json.uploadToken;

    // Step 2: Upload file data
    const url_upload = `${file_server_url}/file/${uploadid}`;

    // Create multipart form data
    const formData = new FormData();
    const blob = new Blob([data], { type: "application/octet-stream" });
    formData.append("file", blob, dataname);

    http_response = await fetch(url_upload, {
        method: "POST",
        headers: { "X-UploadToken": uploadtoken },
        body: formData
    });

    const fileResponseJson = await http_response.json();
    const fileid = fileResponseJson.id;

    // URL of the uploaded data e.g. "http://192.168.1.20:8080/file/3F62E/4AgGT/test.zip"
    const url = `${file_server_url}/file/${uploadid}/${fileid}/${encodeURIComponent(dataname)}`;

    return {
        status: http_response.status,
        uploadid: uploadid,
        fileid: fileid,
        url: url
    };
}
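The returned download URL follows the plik pattern `<server>/file/<uploadid>/<fileid>/<dataname>`; a sketch with hypothetical IDs:

```javascript
// Hypothetical upload/file IDs; only the URL shape is taken from the handler above.
const file_server_url = "http://localhost:8080";
const uploadid = "3F62E";
const fileid = "4AgGT";
const dataname = "test.zip";
const downloadUrl = `${file_server_url}/file/${uploadid}/${fileid}/${encodeURIComponent(dataname)}`;
// → "http://localhost:8080/file/3F62E/4AgGT/test.zip"
```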
// Export for Node.js
if (typeof module !== 'undefined' && module.exports) {
    module.exports = {
        MessageEnvelope,
        MessagePayload,
        smartsend,
        smartreceive,
        _serialize_data,
        _deserialize_data,
        _fetch_with_backoff,
        _upload_to_fileserver,
        plik_oneshot_upload,
        DEFAULT_SIZE_THRESHOLD,
        DEFAULT_NATS_URL,
        DEFAULT_FILESERVER_URL,
        uuid4,
        log_trace
    };
}

// Export for browser
if (typeof window !== 'undefined') {
    window.NATSBridge = {
        MessageEnvelope,
        MessagePayload,
        smartsend,
        smartreceive,
        _serialize_data,
        _deserialize_data,
        _fetch_with_backoff,
        _upload_to_fileserver,
        plik_oneshot_upload,
        DEFAULT_SIZE_THRESHOLD,
        DEFAULT_NATS_URL,
        DEFAULT_FILESERVER_URL,
        uuid4,
        log_trace
    };
}

src/nats_bridge.py (new file, 830 lines)
@@ -0,0 +1,830 @@
"""
|
||||
Python NATS Bridge - Bi-Directional Data Bridge
|
||||
|
||||
This module provides functionality for sending and receiving data over NATS
|
||||
using the Claim-Check pattern for large payloads.
|
||||
|
||||
Supported types: "text", "dictionary", "table", "image", "audio", "video", "binary"
|
||||
|
||||
Multi-Payload Support (Standard API):
|
||||
The system uses a standardized list-of-tuples format for all payload operations.
|
||||
Even when sending a single payload, the user must wrap it in a list.
|
||||
|
||||
API Standard:
|
||||
# Input format for smartsend (always a list of tuples with type info)
|
||||
[(dataname1, data1, type1), (dataname2, data2, type2), ...]
|
||||
|
||||
# Output format for smartreceive (always returns a list of tuples)
|
||||
[(dataname1, data1, type1), (dataname2, data2, type2), ...]
|
||||
"""
|
||||
|
||||
import json
import time
import uuid

# Constants
DEFAULT_SIZE_THRESHOLD = 1000000  # 1MB - threshold for switching from direct to link transport
DEFAULT_BROKER_URL = "nats://localhost:4222"
DEFAULT_FILESERVER_URL = "http://localhost:8080"

# ============================================= 100 ============================================== #

class MessagePayload:
    """Internal message payload structure representing a single payload within a NATS message envelope.

    It supports both direct transport (base64-encoded data) and link transport (URL-based).

    Attributes:
        id: Unique identifier for this payload (e.g., "uuid4")
        dataname: Name of the payload (e.g., "login_image")
        payload_type: Payload type ("text", "dictionary", "table", "image", "audio", "video", "binary")
        transport: Transport method ("direct" or "link")
        encoding: Encoding method ("none", "json", "base64", "arrow-ipc")
        size: Size of the payload in bytes
        data: Payload data (bytes for direct, URL for link)
        metadata: Optional metadata dictionary
    """

    def __init__(self, data, payload_type, id="", dataname="", transport="direct",
                 encoding="none", size=0, metadata=None):
        """
        Initialize a MessagePayload.

        Args:
            data: Payload data (base64 string for direct, URL string for link)
            payload_type: Payload type ("text", "dictionary", "table", "image", "audio", "video", "binary")
            id: Unique identifier for this payload (auto-generated if empty)
            dataname: Name of the payload (auto-generated UUID if empty)
            transport: Transport method ("direct" or "link")
            encoding: Encoding method ("none", "json", "base64", "arrow-ipc")
            size: Size of the payload in bytes
            metadata: Optional metadata dictionary
        """
        self.id = id if id else self._generate_uuid()
        self.dataname = dataname if dataname else self._generate_uuid()
        self.payload_type = payload_type
        self.transport = transport
        self.encoding = encoding
        self.size = size
        self.data = data
        self.metadata = metadata if metadata else {}

    def _generate_uuid(self):
        """Generate a UUID string."""
        return str(uuid.uuid4())

    def to_dict(self):
        """Convert payload to dictionary for JSON serialization."""
        payload_dict = {
            "id": self.id,
            "dataname": self.dataname,
            "payload_type": self.payload_type,
            "transport": self.transport,
            "encoding": self.encoding,
            "size": self.size,
        }

        # Include data based on transport type
        if self.transport == "direct" and self.data is not None:
            if self.encoding == "base64" or self.encoding == "json":
                payload_dict["data"] = self.data
            else:
                # For other encodings, use base64
                payload_dict["data"] = self._to_base64(self.data)
        elif self.transport == "link" and self.data is not None:
            # For link transport, data is a URL string
            payload_dict["data"] = self.data

        if self.metadata:
            payload_dict["metadata"] = self.metadata

        return payload_dict

    def _to_base64(self, data):
        """Convert bytes to base64 string."""
        if isinstance(data, bytes):
            # ubinascii on Micropython, binascii on CPython
            try:
                import ubinascii as binascii
            except ImportError:
                import binascii
            return binascii.b2a_base64(data).decode('utf-8').strip()
        return data

    def _from_base64(self, data):
        """Convert base64 string to bytes."""
        try:
            import ubinascii as binascii
        except ImportError:
            import binascii
        return binascii.a2b_base64(data)
class MessageEnvelope:
    """Internal message envelope structure containing multiple payloads with metadata."""

    def __init__(self, send_to, payloads, correlation_id="", msg_id="", timestamp="",
                 msg_purpose="", sender_name="", sender_id="", receiver_name="",
                 receiver_id="", reply_to="", reply_to_msg_id="", broker_url=DEFAULT_BROKER_URL,
                 metadata=None):
        """
        Initialize a MessageEnvelope.

        Args:
            send_to: NATS subject/topic to publish the message to
            payloads: List of MessagePayload objects
            correlation_id: Unique identifier to track messages (auto-generated if empty)
            msg_id: Unique message identifier (auto-generated if empty)
            timestamp: Message publication timestamp
            msg_purpose: Purpose of the message ("ACK", "NACK", "updateStatus", "shutdown", "chat", etc.)
            sender_name: Name of the sender
            sender_id: UUID of the sender (auto-generated if empty)
            receiver_name: Name of the receiver (empty means broadcast)
            receiver_id: UUID of the receiver (empty means broadcast)
            reply_to: Topic where receiver should reply
            reply_to_msg_id: Message ID this message is replying to
            broker_url: NATS broker URL
            metadata: Optional message-level metadata
        """
        self.correlation_id = correlation_id if correlation_id else self._generate_uuid()
        self.msg_id = msg_id if msg_id else self._generate_uuid()
        self.timestamp = timestamp if timestamp else self._get_timestamp()
        self.send_to = send_to
        self.msg_purpose = msg_purpose
        self.sender_name = sender_name
        self.sender_id = sender_id if sender_id else self._generate_uuid()
        self.receiver_name = receiver_name
        self.receiver_id = receiver_id  # left empty for broadcast
        self.reply_to = reply_to
        self.reply_to_msg_id = reply_to_msg_id
        self.broker_url = broker_url
        self.metadata = metadata if metadata else {}
        self.payloads = payloads

    def _generate_uuid(self):
        """Generate a UUID string."""
        return str(uuid.uuid4())

    def _get_timestamp(self):
        """Get current timestamp in ISO format (simplified; Micropython may not have full datetime)."""
        return time.strftime("%Y-%m-%dT%H:%M:%S", time.localtime())

    def to_json(self):
        """Convert envelope to JSON string.

        Returns:
            str: JSON string representation of the envelope using snake_case field names
        """
        obj = {
            "correlation_id": self.correlation_id,
            "msg_id": self.msg_id,
            "timestamp": self.timestamp,
            "send_to": self.send_to,
            "msg_purpose": self.msg_purpose,
            "sender_name": self.sender_name,
            "sender_id": self.sender_id,
            "receiver_name": self.receiver_name,
            "receiver_id": self.receiver_id,
            "reply_to": self.reply_to,
            "reply_to_msg_id": self.reply_to_msg_id,
            "broker_url": self.broker_url
        }

        # Include metadata if not empty
        if self.metadata:
            obj["metadata"] = self.metadata

        # Convert payloads to JSON array
        if self.payloads:
            payloads_json = []
            for payload in self.payloads:
                payloads_json.append(payload.to_dict())
            obj["payloads"] = payloads_json

        return json.dumps(obj)
def log_trace(correlation_id, message):
    """Log a trace message with correlation ID and timestamp."""
    timestamp = time.strftime("%Y-%m-%dT%H:%M:%S", time.localtime())
    print("[{}] [Correlation: {}] {}".format(timestamp, correlation_id, message))
def _serialize_data(data, payload_type):
    """Serialize data according to specified format.

    This function serializes arbitrary data into a binary representation based on the specified type.
    It supports multiple serialization formats for different data types.

    Args:
        data: Data to serialize
            - "text": String
            - "dictionary": JSON-serializable dict
            - "table": Tabular data (pandas DataFrame or list of dicts)
            - "image", "audio", "video", "binary": bytes
        payload_type: Target format ("text", "dictionary", "table", "image", "audio", "video", "binary")

    Returns:
        bytes: Binary representation of the serialized data

    Example:
        >>> text_bytes = _serialize_data("Hello World", "text")
        >>> json_bytes = _serialize_data({"key": "value"}, "dictionary")
        >>> table_bytes = _serialize_data([{"id": 1, "name": "Alice"}], "table")
    """
    if payload_type == "text":
        if isinstance(data, str):
            return data.encode('utf-8')
        else:
            raise ValueError("Text data must be a string")

    elif payload_type == "dictionary":
        if isinstance(data, dict):
            json_str = json.dumps(data)
            return json_str.encode('utf-8')
        else:
            raise ValueError("Dictionary data must be a dict")

    elif payload_type == "table":
        # Support pandas DataFrame or list of dicts
        try:
            import pandas as pd
            if isinstance(data, pd.DataFrame):
                # Convert DataFrame to JSON and then to bytes
                json_str = data.to_json(orient='records', force_ascii=False)
                return json_str.encode('utf-8')
            elif isinstance(data, list) and len(data) > 0 and isinstance(data[0], dict):
                # List of dicts
                json_str = json.dumps(data)
                return json_str.encode('utf-8')
            else:
                raise ValueError("Table data must be a pandas DataFrame or list of dicts")
        except ImportError:
            # Fallback: if pandas not available, treat as list of dicts
            if isinstance(data, list):
                json_str = json.dumps(data)
                return json_str.encode('utf-8')
            else:
                raise ValueError("Table data requires pandas DataFrame or list of dicts (pandas not available)")

    elif payload_type in ("image", "audio", "video", "binary"):
        if isinstance(data, bytes):
            return data
        else:
            raise ValueError("{} data must be bytes".format(payload_type.capitalize()))

    else:
        raise ValueError("Unknown payload_type: {}".format(payload_type))
def _deserialize_data(data_bytes, payload_type, correlation_id):
    """Deserialize bytes to data based on type.

    This function converts serialized bytes back to Python data based on type.
    It handles "text" (string), "dictionary" (JSON deserialization), "table" (JSON deserialization),
    and "image", "audio", "video", "binary" (raw binary data).

    Args:
        data_bytes: Serialized data as bytes
        payload_type: Data type ("text", "dictionary", "table", "image", "audio", "video", "binary")
        correlation_id: Correlation ID for logging

    Returns:
        Deserialized data:
            - "text": str
            - "dictionary": dict
            - "table": list of dicts (or pandas DataFrame if available)
            - "image", "audio", "video", "binary": bytes

    Example:
        >>> text_data = _deserialize_data(b"Hello", "text", "corr_id")
        >>> json_data = _deserialize_data(b'{"key": "value"}', "dictionary", "corr_id")
        >>> table_data = _deserialize_data(b'[{"id": 1}]', "table", "corr_id")
    """
    if payload_type == "text":
        return data_bytes.decode('utf-8')

    elif payload_type == "dictionary":
        json_str = data_bytes.decode('utf-8')
        return json.loads(json_str)

    elif payload_type == "table":
        # Deserialize table data (JSON format)
        json_str = data_bytes.decode('utf-8')
        table_data = json.loads(json_str)
        # If pandas is available, try to convert to DataFrame
        try:
            import pandas as pd
            return pd.DataFrame(table_data)
        except ImportError:
            return table_data

    elif payload_type in ("image", "audio", "video", "binary"):
        return data_bytes

    else:
        raise ValueError("Unknown payload_type: {}".format(payload_type))
class NATSConnection:
    """Simple NATS connection for Python and Micropython."""

    def __init__(self, url=DEFAULT_BROKER_URL):
        """Initialize NATS connection.

        Args:
            url: NATS server URL (e.g., "nats://localhost:4222")
        """
        self.url = url
        self.host = "localhost"
        self.port = 4222
        self.conn = None
        self._parse_url(url)

    def _parse_url(self, url):
        """Parse NATS URL to extract host and port."""
        if url.startswith("nats://"):
            url = url[7:]
        elif url.startswith("tls://"):
            url = url[6:]

        if ":" in url:
            self.host, port_str = url.split(":", 1)
            self.port = int(port_str)
        else:
            self.host = url

    def connect(self):
        """Connect to NATS server."""
        # Use socket on CPython; fall back to usocket on Micropython.
        # Note: a failed import raises ImportError, not NameError.
        try:
            import socket
        except ImportError:
            import usocket as socket
        addr = socket.getaddrinfo(self.host, self.port)[0][-1]
        self.conn = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.conn.connect(addr)

        log_trace("", "Connected to NATS server at {}:{}".format(self.host, self.port))

    def publish(self, subject, message):
        """Publish a message to a NATS subject.

        Args:
            subject: NATS subject to publish to
            message: Message to publish (bytes or string)
        """
        if isinstance(message, str):
            message = message.encode('utf-8')

        # Simple NATS protocol implementation: PUB <subject> <size>\r\n<payload>\r\n
        msg = "PUB {} {}\r\n".format(subject, len(message))
        msg = msg.encode('utf-8') + message + b"\r\n"

        # The socket object works the same on Python and Micropython
        self.conn.send(msg)

        log_trace("", "Message published to {}".format(subject))

    def subscribe(self, subject, callback):
        """Subscribe to a NATS subject.

        Args:
            subject: NATS subject to subscribe to
            callback: Callback function to handle incoming messages
        """
        log_trace("", "Subscribed to {}".format(subject))
        # Simplified subscription - a full implementation would send a SUB frame
        # and dispatch incoming MSG frames. For Micropython, we use a simple
        # polling approach and only record the subscription here.
        self.subscribed_subject = subject
        self.subscription_callback = callback

    def wait_message(self, timeout=1000):
        """Wait for incoming message.

        Args:
            timeout: Timeout in milliseconds

        Returns:
            NATS message object or None if timeout
        """
        # Simplified message reading - a full implementation would read from
        # the socket. For now, this is a placeholder.
        return None

    def close(self):
        """Close the NATS connection."""
        if self.conn:
            self.conn.close()
            self.conn = None
            log_trace("", "NATS connection closed")


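The `publish()` method above writes a raw NATS `PUB` frame over the socket. The framing can be checked in isolation without a connection; `build_pub_frame` below is an illustrative helper, not part of the module:

```python
def build_pub_frame(subject, message):
    # Mirrors NATSConnection.publish(): "PUB <subject> <size>\r\n<payload>\r\n"
    if isinstance(message, str):
        message = message.encode('utf-8')
    header = "PUB {} {}\r\n".format(subject, len(message)).encode('utf-8')
    return header + message + b"\r\n"

frame = build_pub_frame("demo.subject", "hi")
# frame == b"PUB demo.subject 2\r\nhi\r\n"
```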
def _fetch_with_backoff(url, max_retries=5, base_delay=100, max_delay=5000, correlation_id=""):
    """Fetch data from URL with exponential backoff.

    This function retrieves data from a URL with retry logic using
    exponential backoff to handle transient failures.

    Args:
        url: URL to fetch from
        max_retries: Maximum number of retry attempts (default: 5)
        base_delay: Initial delay in milliseconds (default: 100)
        max_delay: Maximum delay in milliseconds (default: 5000)
        correlation_id: Correlation ID for logging

    Returns:
        bytes: Fetched data

    Raises:
        Exception: If all retry attempts fail

    Example:
        >>> data = _fetch_with_backoff("http://example.com/file.zip", 5, 100, 5000, "corr_id")
    """
    delay = base_delay
    for attempt in range(1, max_retries + 1):
        try:
            # Simple HTTP GET request
            # Try urequests for Micropython first, then requests for Python
            try:
                import urequests
                response = urequests.get(url)
                status_code = response.status_code
                content = response.content
            except ImportError:
                try:
                    import requests
                    response = requests.get(url)
                    response.raise_for_status()
                    status_code = response.status_code
                    content = response.content
                except ImportError:
                    raise Exception("No HTTP library available (urequests or requests)")

            if status_code == 200:
                log_trace(correlation_id, "Successfully fetched data from {} on attempt {}".format(url, attempt))
                return content
            else:
                raise Exception("Failed to fetch: {}".format(status_code))
        except Exception as e:
            log_trace(correlation_id, "Attempt {} failed: {}".format(attempt, str(e)))
            if attempt < max_retries:
                time.sleep(delay / 1000.0)
                delay = min(delay * 2, max_delay)

    raise Exception("Failed to fetch data after {} attempts".format(max_retries))


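The retry loop above sleeps between attempts following a capped exponential schedule (no sleep after the final attempt). A standalone sketch of that schedule; `backoff_schedule` is an illustrative helper, not part of the module:

```python
def backoff_schedule(max_retries=5, base_delay=100, max_delay=5000):
    # Delays in milliseconds slept between attempts, doubling up to max_delay
    delays = []
    delay = base_delay
    for _ in range(max_retries - 1):  # one fewer sleep than attempts
        delays.append(delay)
        delay = min(delay * 2, max_delay)
    return delays

# With the defaults, four sleeps separate the five attempts:
# backoff_schedule() == [100, 200, 400, 800]
```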
def plik_oneshot_upload(fileserver_url, dataname, data):
    """Upload a single file to a plik server using one-shot mode.

    This function uploads raw byte data to a plik server in one-shot mode (no reusable upload session).
    It first creates a one-shot upload session by sending a POST request with {"OneShot": true},
    retrieves an upload ID and token, then uploads the file data as multipart form data using the token.

    Args:
        fileserver_url: Base URL of the plik server (e.g., "http://localhost:8080")
        dataname: Name of the file being uploaded
        data: Raw byte data of the file content

    Returns:
        dict: Dictionary with keys:
        - "status": HTTP server response status
        - "uploadid": ID of the one-shot upload session
        - "fileid": ID of the uploaded file within the session
        - "url": Full URL to download the uploaded file

    Example:
        >>> result = plik_oneshot_upload("http://localhost:8080", "test.txt", b"hello world")
        >>> result["status"], result["uploadid"], result["fileid"], result["url"]
    """
    import json

    try:
        import urequests
    except ImportError:
        import requests as urequests

    # Get upload ID
    url_get_upload_id = "{}/upload".format(fileserver_url)
    headers = {"Content-Type": "application/json"}
    body = json.dumps({"OneShot": True})

    response = urequests.post(url_get_upload_id, headers=headers, data=body)
    response_json = json.loads(response.text if hasattr(response, 'text') else response.content)

    uploadid = response_json.get("id")
    uploadtoken = response_json.get("uploadToken")

    # Upload file
    url_upload = "{}/file/{}".format(fileserver_url, uploadid)

    # For Micropython, we need to construct the multipart form data manually
    # This is a simplified approach
    boundary = "----WebKitFormBoundary{}".format(uuid.uuid4().hex[:16])

    # Create multipart body
    part1 = "--{}\r\n".format(boundary)
    part1 += "Content-Disposition: form-data; name=\"file\"; filename=\"{}\"\r\n".format(dataname)
    part1 += "Content-Type: application/octet-stream\r\n\r\n"
    part1_bytes = part1.encode('utf-8')

    part2 = "\r\n--{}--".format(boundary)
    part2_bytes = part2.encode('utf-8')

    # Combine all parts
    full_body = part1_bytes + data + part2_bytes

    # Set content type with boundary; the upload token must accompany the request
    headers = {
        "Content-Type": "multipart/form-data; boundary={}".format(boundary),
        "X-UploadToken": uploadtoken
    }

    response = urequests.post(url_upload, headers=headers, data=full_body)
    response_json = json.loads(response.text if hasattr(response, 'text') else response.content)

    fileid = response_json.get("id")
    url = "{}/file/{}/{}".format(fileserver_url, uploadid, dataname)

    return {
        "status": response.status_code,
        "uploadid": uploadid,
        "fileid": fileid,
        "url": url
    }


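The hand-rolled multipart body above can be factored into a standalone sketch for inspection; `build_multipart_body` is an illustrative helper, not part of the module:

```python
def build_multipart_body(dataname, data, boundary):
    # Same layout plik_oneshot_upload assembles: one "file" part framed by the boundary
    head = ("--{0}\r\n"
            'Content-Disposition: form-data; name="file"; filename="{1}"\r\n'
            "Content-Type: application/octet-stream\r\n\r\n").format(boundary, dataname)
    tail = "\r\n--{0}--".format(boundary)
    return head.encode('utf-8') + data + tail.encode('utf-8')

body = build_multipart_body("test.txt", b"hello", "XYZ")
# body starts with b"--XYZ\r\n" and ends with b"\r\n--XYZ--"
```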
def smartsend(subject, data, broker_url=DEFAULT_BROKER_URL, fileserver_url=DEFAULT_FILESERVER_URL,
              fileserver_upload_handler=plik_oneshot_upload, size_threshold=DEFAULT_SIZE_THRESHOLD,
              correlation_id=None, msg_purpose="chat", sender_name="NATSBridge",
              receiver_name="", receiver_id="", reply_to="", reply_to_msg_id="", is_publish=True):
    """Send data either directly via NATS or via a fileserver URL, depending on payload size.

    This function intelligently routes data delivery based on payload size relative to a threshold.
    If the serialized payload is smaller than `size_threshold`, it encodes the data as Base64 and
    publishes directly over NATS. Otherwise, it uploads the data to a fileserver and publishes
    only the download URL over NATS.

    Args:
        subject: NATS subject to publish the message to
        data: List of (dataname, data, payload_type) tuples to send
            - dataname: Name of the payload
            - data: The actual data to send
            - payload_type: Payload type ("text", "dictionary", "table", "image", "audio", "video", "binary")
        broker_url: URL of the NATS server
        fileserver_url: URL of the HTTP file server
        fileserver_upload_handler: Function to handle fileserver uploads (must return dict with "status", "uploadid", "fileid", "url" keys)
        size_threshold: Threshold in bytes separating direct vs link transport (default: 1MB)
        correlation_id: Optional correlation ID for tracing; if None, a UUID is generated
        msg_purpose: Purpose of the message ("ACK", "NACK", "updateStatus", "shutdown", "chat", etc.)
        sender_name: Name of the sender
        receiver_name: Name of the receiver (empty string means broadcast)
        receiver_id: UUID of the receiver (empty string means broadcast)
        reply_to: Topic to reply to (empty string if no reply expected)
        reply_to_msg_id: Message ID this message is replying to
        is_publish: Whether to automatically publish the message to NATS (default: True)
            - When True: message is published to NATS
            - When False: returns envelope and JSON string without publishing

    Returns:
        tuple: (env, env_json_str) where:
        - env: MessageEnvelope object with all metadata and payloads
        - env_json_str: JSON string representation of the envelope for publishing

    Example:
        >>> data = [("message", "Hello World!", "text")]
        >>> env, env_json_str = smartsend("/test", data)
        >>> # env: MessageEnvelope with all metadata and payloads
        >>> # env_json_str: JSON string for publishing
    """
    # Generate correlation ID if not provided
    cid = correlation_id if correlation_id is not None else str(uuid.uuid4())

    log_trace(cid, "Starting smartsend for subject: {}".format(subject))

    # Generate message metadata
    msg_id = str(uuid.uuid4())

    # Process each payload in the list
    payloads = []

    for dataname, payload_data, payload_type in data:
        # Serialize data based on type
        payload_bytes = _serialize_data(payload_data, payload_type)

        payload_size = len(payload_bytes)
        log_trace(cid, "Serialized payload '{}' (payload_type: {}) size: {} bytes".format(
            dataname, payload_type, payload_size))

        # Decision: Direct vs Link
        if payload_size < size_threshold:
            # Direct path - Base64 encode and send via NATS
            # Convert to base64 string for JSON
            try:
                import ubinascii
                payload_b64_str = ubinascii.b2a_base64(payload_bytes).decode('utf-8').strip()
            except ImportError:
                import base64
                payload_b64_str = base64.b64encode(payload_bytes).decode('utf-8')

            log_trace(cid, "Using direct transport for {} bytes".format(payload_size))

            # Create MessagePayload for direct transport
            payload = MessagePayload(
                payload_b64_str,
                payload_type,
                id=str(uuid.uuid4()),
                dataname=dataname,
                transport="direct",
                encoding="base64",
                size=payload_size,
                metadata={"payload_bytes": payload_size}
            )
            payloads.append(payload)
        else:
            # Link path - Upload to HTTP server, send URL via NATS
            log_trace(cid, "Using link transport, uploading to fileserver")

            # Upload to HTTP server
            response = fileserver_upload_handler(fileserver_url, dataname, payload_bytes)

            if response.get("status") != 200:
                raise Exception("Failed to upload data to fileserver: {}".format(response.get("status")))

            url = response.get("url")
            log_trace(cid, "Uploaded to URL: {}".format(url))

            # Create MessagePayload for link transport
            payload = MessagePayload(
                url,
                payload_type,
                id=str(uuid.uuid4()),
                dataname=dataname,
                transport="link",
                encoding="none",
                size=payload_size,
                metadata={}
            )
            payloads.append(payload)

    # Create MessageEnvelope with all payloads
    env = MessageEnvelope(
        subject,
        payloads,
        correlation_id=cid,
        msg_id=msg_id,
        msg_purpose=msg_purpose,
        sender_name=sender_name,
        sender_id=str(uuid.uuid4()),
        receiver_name=receiver_name,
        receiver_id=receiver_id,
        reply_to=reply_to,
        reply_to_msg_id=reply_to_msg_id,
        broker_url=broker_url,
        metadata={}
    )

    msg_json = env.to_json()

    # Publish to NATS if is_publish is True
    if is_publish:
        nats_conn = NATSConnection(broker_url)
        nats_conn.connect()
        nats_conn.publish(subject, msg_json)
        nats_conn.close()

    # Return tuple of (envelope, json_string) for both direct and link transport
    return (env, msg_json)


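The direct-vs-link decision in `smartsend` reduces to a single size comparison against the threshold. A minimal sketch, assuming the documented 1 MB default (`choose_transport` is an illustrative helper, not part of the module):

```python
SIZE_THRESHOLD = 1024 * 1024  # assumed 1 MB default for DEFAULT_SIZE_THRESHOLD

def choose_transport(payload_size, size_threshold=SIZE_THRESHOLD):
    # smartsend's routing rule: inline small payloads over NATS,
    # upload large ones and publish only the download URL
    return "direct" if payload_size < size_threshold else "link"

# choose_transport(512) == "direct"
# choose_transport(2 * 1024 * 1024) == "link"
```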
def smartreceive(msg, fileserver_download_handler=_fetch_with_backoff, max_retries=5,
                 base_delay=100, max_delay=5000):
    """Receive and process messages from NATS.

    This function processes incoming NATS messages, handling both direct transport
    (base64 decoded payloads) and link transport (URL-based payloads).

    Args:
        msg: NATS message to process (dict or JSON string with envelope data)
        fileserver_download_handler: Function to handle downloading data from file server URLs
            Receives: (url, max_retries, base_delay, max_delay, correlation_id)
            Returns: bytes (the downloaded data)
        max_retries: Maximum retry attempts for fetching URL (default: 5)
        base_delay: Initial delay for exponential backoff in ms (default: 100)
        max_delay: Maximum delay for exponential backoff in ms (default: 5000)

    Returns:
        dict: Envelope dictionary with metadata and 'payloads' field containing list of
        (dataname, data, payload_type) tuples

    Example:
        >>> env = smartreceive(msg)
        >>> # env contains envelope metadata and payloads field
        >>> # env["payloads"] = [(dataname1, data1, payload_type1), ...]
        >>> for dataname, data, payload_type in env["payloads"]:
        ...     print("Received {} of type {}: {}".format(dataname, payload_type, data))
    """
    # Parse the JSON envelope
    json_data = msg if isinstance(msg, dict) else json.loads(msg)
    correlation_id = json_data.get("correlation_id", "")
    log_trace(correlation_id, "Processing received message")

    # Process all payloads in the envelope
    payloads_list = []

    for payload in json_data.get("payloads", []):
        transport = payload.get("transport", "")
        dataname = payload.get("dataname", "")

        if transport == "direct":
            log_trace(correlation_id,
                      "Direct transport - decoding payload '{}'".format(dataname))

            # Extract base64 payload from the payload
            payload_b64 = payload.get("data", "")

            # Decode Base64 payload
            try:
                import ubinascii
                payload_bytes = ubinascii.a2b_base64(payload_b64.encode('utf-8'))
            except ImportError:
                import base64
                payload_bytes = base64.b64decode(payload_b64)

            # Deserialize based on type
            payload_type = payload.get("payload_type", "")
            data = _deserialize_data(payload_bytes, payload_type, correlation_id)

            payloads_list.append((dataname, data, payload_type))

        elif transport == "link":
            # Extract download URL from the payload
            url = payload.get("data", "")
            log_trace(correlation_id,
                      "Link transport - fetching '{}' from URL: {}".format(dataname, url))

            # Fetch with exponential backoff
            downloaded_data = fileserver_download_handler(
                url, max_retries, base_delay, max_delay, correlation_id
            )

            # Deserialize based on type
            payload_type = payload.get("payload_type", "")
            data = _deserialize_data(downloaded_data, payload_type, correlation_id)

            payloads_list.append((dataname, data, payload_type))

        else:
            raise ValueError("Unknown transport type for payload '{}': {}".format(dataname, transport))

    # Replace payloads field with the processed list of (dataname, data, payload_type) tuples
    json_data["payloads"] = payloads_list

    return json_data


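The direct-transport round trip between `smartsend` and `smartreceive` (serialize, base64-encode, publish; then decode, deserialize) can be sketched end to end with only the standard library, independent of the module:

```python
import base64
import json

# Sender side of the direct path: serialize a dictionary, then base64-encode
sent = {"key": "value", "n": 3}
b64 = base64.b64encode(json.dumps(sent).encode('utf-8')).decode('utf-8')

# Receiver side: base64-decode, then deserialize
received = json.loads(base64.b64decode(b64).decode('utf-8'))
# received == {"key": "value", "n": 3}
```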
# Utility functions
def generate_uuid():
    """Generate a UUID string."""
    return str(uuid.uuid4())


def get_timestamp():
    """Get current timestamp in ISO format."""
    return time.strftime("%Y-%m-%dT%H:%M:%S", time.localtime())


# Example usage
if __name__ == "__main__":
    print("NATSBridge - Bi-Directional Data Bridge")
    print("=======================================")
    print("This module provides:")
    print("  - MessageEnvelope: Message envelope structure with snake_case fields")
    print("  - MessagePayload: Payload structure with payload_type field")
    print("  - smartsend: Send data via NATS with automatic transport selection")
    print("  - smartreceive: Receive and process messages from NATS")
    print("  - plik_oneshot_upload: Upload files to HTTP file server")
    print("  - _fetch_with_backoff: Fetch data from URLs with retry logic")
    print()
    print("Usage:")
    print("  from nats_bridge import smartsend, smartreceive")
    print()
    print("  # Send data (list of (dataname, data, payload_type) tuples)")
    print("  data = [(\"message\", \"Hello World!\", \"text\")]")
    print("  env, env_json_str = smartsend(\"my.subject\", data)")
    print()
    print("  # On receiver:")
    print("  env = smartreceive(msg)")
    print("  for dataname, data, payload_type in env[\"payloads\"]:")
    print("      print(\"Received {} of type {}: {}\".format(dataname, payload_type, data))")
@@ -1,67 +0,0 @@
#!/usr/bin/env julia
# Scenario 1: Command & Control (Small JSON)
# Tests small JSON payloads (< 1MB) sent directly via NATS

using NATS
using JSON3
using UUIDs
using Dates

# Include the bridge module
include("../src/julia_bridge.jl")
using .BiDirectionalBridge

# Configuration
const CONTROL_SUBJECT = "control"
const RESPONSE_SUBJECT = "control_response"
const NATS_URL = "nats://localhost:4222"

# Create correlation ID for tracing
correlation_id = string(uuid4())

# Receiver: Listen for control commands
function start_control_listener()
    conn = NATS.Connection(NATS_URL)
    try
        NATS.subscribe(conn, CONTROL_SUBJECT) do msg
            log_trace(String(msg.data))

            # Parse the envelope
            env = MessageEnvelope(String(msg.data))

            # Parse JSON payload
            config = JSON3.read(env.payload)

            # Execute simulation with parameters
            step_size = config.step_size
            iterations = config.iterations

            # Simulate processing
            sleep(0.1)  # Simulate some work

            # Send acknowledgment
            response = Dict(
                "status" => "Running",
                "correlation_id" => env.correlation_id,
                "step_size" => step_size,
                "iterations" => iterations
            )

            NATS.publish(conn, RESPONSE_SUBJECT, JSON3.write(response))
            log_trace("Sent response: $(JSON3.write(response))")
        end

        # Keep listening for 5 seconds
        sleep(5)
    finally
        NATS.close(conn)
    end
end

# Helper: Log with correlation ID
function log_trace(message)
    timestamp = Dates.now()
    println("[$timestamp] [Correlation: $correlation_id] $message")
end

# Run the listener
start_control_listener()
@@ -1,34 +0,0 @@
#!/usr/bin/env node
// Scenario 1: Command & Control (Small JSON)
// Tests small JSON payloads (< 1MB) sent directly via NATS

const { SmartSend } = require('../js_bridge');

// Configuration
const CONTROL_SUBJECT = "control";
const NATS_URL = "nats://localhost:4222";

// Create correlation ID for tracing
const correlationId = require('uuid').v4();

// Sender: Send control command to Julia
async function sendControlCommand() {
    const config = {
        step_size: 0.01,
        iterations: 1000
    };

    // Send via SmartSend with type="json"
    const env = await SmartSend(
        CONTROL_SUBJECT,
        config,
        "json",
        { correlationId }
    );

    console.log(`Sent control command with correlation_id: ${correlationId}`);
    console.log(`Envelope: ${JSON.stringify(env, null, 2)}`);
}

// Run the sender
sendControlCommand().catch(console.error);
@@ -1,66 +0,0 @@
#!/usr/bin/env julia
# Scenario 2: Deep Dive Analysis (Large Arrow Table)
# Tests large Arrow tables (> 1MB) sent via HTTP fileserver

using NATS
using Arrow
using DataFrames
using JSON3
using UUIDs
using Dates

# Include the bridge module
include("../src/julia_bridge.jl")
using .BiDirectionalBridge

# Configuration
const ANALYSIS_SUBJECT = "analysis_results"
const RESPONSE_SUBJECT = "analysis_response"
const NATS_URL = "nats://localhost:4222"

# Create correlation ID for tracing
correlation_id = string(uuid4())

# Receiver: Listen for analysis results
function start_analysis_listener()
    conn = NATS.Connection(NATS_URL)
    try
        NATS.subscribe(conn, ANALYSIS_SUBJECT) do msg
            log_trace("Received message from $(msg.subject)")

            # Parse the envelope
            env = MessageEnvelope(String(msg.data))

            # Use SmartReceive to handle the data
            result = SmartReceive(msg)

            # Process the data based on type
            if result.envelope.type == "table"
                df = result.data
                log_trace("Received DataFrame with $(nrow(df)) rows")
                log_trace("DataFrame columns: $(names(df))")

                # Send acknowledgment
                response = Dict(
                    "status" => "Processed",
                    "correlation_id" => env.correlation_id,
                    "row_count" => nrow(df)
                )
                NATS.publish(conn, RESPONSE_SUBJECT, JSON3.write(response))
            end
        end

        # Keep listening for 10 seconds
        sleep(10)
    finally
        NATS.close(conn)
    end
end

# Helper: Log with correlation ID
function log_trace(message)
    timestamp = Dates.now()
    println("[$timestamp] [Correlation: $correlation_id] $message")
end

# Run the listener
start_analysis_listener()
@@ -1,54 +0,0 @@
#!/usr/bin/env node
// Scenario 2: Deep Dive Analysis (Large Arrow Table)
// Tests large Arrow tables (> 1MB) sent via HTTP fileserver

const { SmartSend } = require('../js_bridge');

// Configuration
const ANALYSIS_SUBJECT = "analysis_results";
const NATS_URL = "nats://localhost:4222";

// Create correlation ID for tracing
const correlationId = require('uuid').v4();

// Sender: Send large Arrow table to Julia
async function sendLargeTable() {
    // Create a large DataFrame-like structure
    // For testing, we'll create a smaller but still large table
    const numRows = 1000000; // 1 million rows

    const data = {
        id: Array.from({ length: numRows }, (_, i) => i + 1),
        value: Array.from({ length: numRows }, () => Math.random()),
        category: Array.from({ length: numRows }, () => ['A', 'B', 'C'][Math.floor(Math.random() * 3)])
    };

    // Convert to Arrow Table
    const { Table, Vector } = require('apache-arrow');

    const idVector = Vector.from(data.id);
    const valueVector = Vector.from(data.value);
    const categoryVector = Vector.from(data.category);

    const table = Table.from({
        id: idVector,
        value: valueVector,
        category: categoryVector
    });

    // Send via SmartSend with type="table"
    const env = await SmartSend(
        ANALYSIS_SUBJECT,
        table,
        "table",
        { correlationId }
    );

    console.log(`Sent large table with ${numRows} rows`);
    console.log(`Correlation ID: ${correlationId}`);
    console.log(`Transport: ${env.transport}`);
    console.log(`URL: ${env.url || 'N/A'}`);
}

// Run the sender
sendLargeTable().catch(console.error);
test/test_js_dict_receiver.js (new file, 80 lines)
@@ -0,0 +1,80 @@
#!/usr/bin/env node
// Test script for Dictionary transport testing
// Tests receiving 1 large and 1 small Dictionary via direct and link transport
// Uses NATSBridge.js smartreceive with "dictionary" type

const { smartreceive } = require('./src/NATSBridge');

// Configuration
const SUBJECT = "/NATSBridge_dict_test";
const NATS_URL = "nats.yiem.cc";

// Helper: Log with timestamp
function log_trace(message) {
    const timestamp = new Date().toISOString();
    console.log(`[${timestamp}] ${message}`);
}

// Receiver: Listen for messages and verify Dictionary handling
async function test_dict_receive() {
    // Connect to NATS
    const { connect } = require('nats');
    const nc = await connect({ servers: [NATS_URL] });

    // Subscribe to the subject
    const sub = nc.subscribe(SUBJECT);

    // Stop listening after 2 minutes; scheduled before the message loop,
    // since the for-await loop below blocks until the subscription closes
    setTimeout(() => {
        nc.close();
        process.exit(0);
    }, 120000);

    for await (const msg of sub) {
        log_trace(`Received message on ${msg.subject}`);

        // Use NATSBridge.smartreceive to handle the data
        const result = await smartreceive(msg, {
            maxRetries: 5,
            baseDelay: 100,
            maxDelay: 5000
        });

        // Result is an envelope dictionary with a payloads field
        // Access payloads with result.payloads
        for (const { dataname, data, type } of result.payloads) {
            if (typeof data === 'object' && data !== null && !Array.isArray(data)) {
                log_trace(`Received Dictionary '${dataname}' of type ${type}`);

                // Display dictionary contents
                console.log("  Contents:");
                for (const [key, value] of Object.entries(data)) {
                    console.log(`    ${key} => ${value}`);
                }

                // Save to JSON file
                const fs = require('fs');
                const output_path = `./received_${dataname}.json`;
                const json_str = JSON.stringify(data, null, 2);
                fs.writeFileSync(output_path, json_str);
                log_trace(`Saved Dictionary to ${output_path}`);
            } else {
                log_trace(`Received unexpected data type for '${dataname}': ${typeof data}`);
            }
        }
    }
}

// Run the test
console.log("Starting Dictionary transport test...");
console.log("Note: This receiver will wait for messages from the sender.");
console.log("Run test_js_to_js_dict_sender.js first to send test data.");

// Run receiver
console.log("testing smartreceive");
test_dict_receive()
    .then(() => console.log("Test completed."))
    .catch(console.error);
test/test_js_dict_sender.js (new file, 165 lines)
@@ -0,0 +1,165 @@
#!/usr/bin/env node
// Test script for Dictionary transport testing
// Tests sending 1 large and 1 small Dictionary via direct and link transport
// Uses NATSBridge.js smartsend with "dictionary" type

const { smartsend, uuid4 } = require('./src/NATSBridge');

// Configuration
const SUBJECT = "/NATSBridge_dict_test";
const NATS_URL = "nats.yiem.cc";
const FILESERVER_URL = "http://192.168.88.104:8080";

// Create correlation ID for tracing
const correlation_id = uuid4();

// Helper: Log with correlation ID
function log_trace(message) {
    const timestamp = new Date().toISOString();
    console.log(`[${timestamp}] [Correlation: ${correlation_id}] ${message}`);
}

// File upload handler for plik server
async function plik_upload_handler(fileserver_url, dataname, data, correlation_id) {
    // Get upload ID
    const url_getUploadID = `${fileserver_url}/upload`;
    const headers = {
        "Content-Type": "application/json"
    };
    const body = JSON.stringify({ OneShot: true });

    let response = await fetch(url_getUploadID, {
        method: "POST",
        headers: headers,
        body: body
    });

    if (!response.ok) {
        throw new Error(`Failed to get upload ID: ${response.status} ${response.statusText}`);
    }

    const responseJson = await response.json();
    const uploadid = responseJson.id;
    const uploadtoken = responseJson.uploadToken;

    // Upload file
    const formData = new FormData();
    const blob = new Blob([data], { type: "application/octet-stream" });
    formData.append("file", blob, dataname);

    response = await fetch(`${fileserver_url}/file/${uploadid}`, {
        method: "POST",
        headers: {
            "X-UploadToken": uploadtoken
        },
        body: formData
    });

    if (!response.ok) {
        throw new Error(`Failed to upload file: ${response.status} ${response.statusText}`);
    }

    const fileResponseJson = await response.json();
    const fileid = fileResponseJson.id;

    const url = `${fileserver_url}/file/${uploadid}/${fileid}/${encodeURIComponent(dataname)}`;

    return {
        status: response.status,
        uploadid: uploadid,
        fileid: fileid,
        url: url
    };
}

// Sender: Send Dictionaries via smartsend
async function test_dict_send() {
    // Create a small Dictionary (will use direct transport)
    const small_dict = {
        name: "Alice",
        age: 30,
        scores: [95, 88, 92],
        metadata: {
            height: 155,
            weight: 55
        }
    };

    // Create a large Dictionary (will use link transport if > 1MB)
    const large_dict_ids = [];
    const large_dict_names = [];
    const large_dict_scores = [];
    const large_dict_categories = [];

    for (let i = 0; i < 50000; i++) {
        large_dict_ids.push(i + 1);
        large_dict_names.push(`User_${i}`);
        large_dict_scores.push(Math.floor(Math.random() * 100) + 1);
        large_dict_categories.push(`Category_${Math.floor(Math.random() * 10) + 1}`);
    }

    const large_dict = {
        ids: large_dict_ids,
        names: large_dict_names,
        scores: large_dict_scores,
        categories: large_dict_categories,
        metadata: {
            source: "test_generator",
            timestamp: new Date().toISOString()
        }
    };

    // Test data 1: small Dictionary
    const data1 = { dataname: "small_dict", data: small_dict, type: "dictionary" };

    // Test data 2: large Dictionary
    const data2 = { dataname: "large_dict", data: large_dict, type: "dictionary" };

    // Use smartsend with dictionary type
|
||||
// For small Dictionary: will use direct transport (JSON encoded)
|
||||
// For large Dictionary: will use link transport (uploaded to fileserver)
|
||||
const { env, env_json_str } = await smartsend(
|
||||
SUBJECT,
|
||||
[data1, data2],
|
||||
{
|
||||
natsUrl: NATS_URL,
|
||||
fileserverUrl: FILESERVER_URL,
|
||||
fileserverUploadHandler: plik_upload_handler,
|
||||
sizeThreshold: 1_000_000,
|
||||
correlationId: correlation_id,
|
||||
msgPurpose: "chat",
|
||||
senderName: "dict_sender",
|
||||
receiverName: "",
|
||||
receiverId: "",
|
||||
replyTo: "",
|
||||
replyToMsgId: "",
|
||||
isPublish: true // Publish the message to NATS
|
||||
}
|
||||
);
|
||||
|
||||
log_trace(`Sent message with ${env.payloads.length} payloads`);
|
||||
|
||||
// Log transport type for each payload
|
||||
for (let i = 0; i < env.payloads.length; i++) {
|
||||
const payload = env.payloads[i];
|
||||
log_trace(`Payload ${i + 1} ('${payload.dataname}'):`);
|
||||
log_trace(` Transport: ${payload.transport}`);
|
||||
log_trace(` Type: ${payload.type}`);
|
||||
log_trace(` Size: ${payload.size} bytes`);
|
||||
log_trace(` Encoding: ${payload.encoding}`);
|
||||
|
||||
if (payload.transport === "link") {
|
||||
log_trace(` URL: ${payload.data}`);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Run the test
|
||||
console.log("Starting Dictionary transport test...");
|
||||
console.log(`Correlation ID: ${correlation_id}`);
|
||||
|
||||
// Run sender
|
||||
console.log("start smartsend for dictionaries");
|
||||
test_dict_send();
|
||||
|
||||
console.log("Test completed.");
|
||||
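The dictionary test above relies on smartsend picking direct or link transport from the encoded payload size against `sizeThreshold`. A minimal sketch of that decision, under the assumption (stated in the test's own comments) that payloads over the threshold go to the fileserver; `chooseTransport` is a hypothetical helper, not part of NATSBridge.js:

```javascript
// Hypothetical sketch of the size-based transport choice described in the
// test comments; not the actual NATSBridge.js implementation.
function chooseTransport(encodedBytes, sizeThreshold = 1_000_000) {
    // At or below the threshold: send inline over NATS ("direct").
    // Above it: upload to the fileserver and send a URL ("link").
    return encodedBytes.length > sizeThreshold ? "link" : "direct";
}

console.log(chooseTransport(new Uint8Array(512)));       // small -> "direct"
console.log(chooseTransport(new Uint8Array(2_000_000))); // large -> "link"
```

This is why `small_dict` is expected to arrive inline while the 50000-row `large_dict` should surface with `transport === "link"` and a download URL in `payload.data`.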
71 test/test_js_file_receiver.js Normal file
@@ -0,0 +1,71 @@
#!/usr/bin/env node
// Test script for large payload testing using binary transport
// Tests receiving a large file (> 1MB) sent via smartsend with binary type

const { smartreceive } = require('./src/NATSBridge');

// Configuration
const SUBJECT = "/NATSBridge_test";
const NATS_URL = "nats.yiem.cc";

// Helper: Log with timestamp
function log_trace(message) {
    const timestamp = new Date().toISOString();
    console.log(`[${timestamp}] ${message}`);
}

// Receiver: Listen for messages and verify large payload handling
async function test_large_binary_receive() {
    // Connect to NATS
    const { connect } = require('nats');
    const nc = await connect({ servers: [NATS_URL] });

    // Subscribe to the subject
    const sub = nc.subscribe(SUBJECT);

    // Keep listening for 2 minutes, then close. The timer must be registered
    // before entering the receive loop, because the loop blocks until the
    // subscription closes.
    setTimeout(() => {
        nc.close();
        process.exit(0);
    }, 120000);

    for await (const msg of sub) {
        log_trace(`Received message on ${msg.subject}`);

        // Use NATSBridge.smartreceive to handle the data
        const result = await smartreceive(
            msg,
            {
                maxRetries: 5,
                baseDelay: 100,
                maxDelay: 5000
            }
        );

        // Result is an envelope dictionary with a payloads field
        for (const { dataname, data, type } of result.payloads) {
            if (data instanceof Uint8Array || Array.isArray(data)) {
                const file_size = data.length;
                log_trace(`Received ${file_size} bytes of binary data for '${dataname}' of type ${type}`);

                // Save received data to a test file
                const fs = require('fs');
                const output_path = `./new_${dataname}`;
                fs.writeFileSync(output_path, Buffer.from(data));
                log_trace(`Saved received data to ${output_path}`);
            } else {
                log_trace(`Received unexpected data type for '${dataname}': ${typeof data}`);
            }
        }
    }
}

// Run the test
console.log("Starting large binary payload test...");

// Run receiver
console.log("testing smartreceive");
test_large_binary_receive();

console.log("Test completed.");
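The smartreceive options above (`maxRetries`, `baseDelay`, `maxDelay`) describe a retry schedule for fetching link payloads. A sketch of the delay sequence those parameters imply, assuming capped exponential backoff; `retryDelays` is illustrative only, not the NATSBridge.js internals:

```javascript
// Hypothetical sketch of a capped exponential backoff schedule for the
// (maxRetries, baseDelay, maxDelay) options; assumed, not NATSBridge code.
function retryDelays(maxRetries, baseDelay, maxDelay) {
    const delays = [];
    for (let attempt = 0; attempt < maxRetries; attempt++) {
        // baseDelay doubles each attempt, never exceeding maxDelay.
        delays.push(Math.min(baseDelay * 2 ** attempt, maxDelay));
    }
    return delays;
}

console.log(retryDelays(5, 100, 5000)); // [ 100, 200, 400, 800, 1600 ]
```

With the test's values, all five retries fit well under the 5000 ms cap, so the receiver gives a link payload roughly three seconds of cumulative retrying before failing.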
144 test/test_js_file_sender.js Normal file
@@ -0,0 +1,144 @@
#!/usr/bin/env node
// Test script for large payload testing using binary transport
// Tests sending a large file (> 1MB) via smartsend with binary type

const { smartsend, uuid4 } = require('./src/NATSBridge');

// Configuration
const SUBJECT = "/NATSBridge_test";
const NATS_URL = "nats.yiem.cc";
const FILESERVER_URL = "http://192.168.88.104:8080";

// Create correlation ID for tracing
const correlation_id = uuid4();

// Helper: Log with correlation ID
function log_trace(message) {
    const timestamp = new Date().toISOString();
    console.log(`[${timestamp}] [Correlation: ${correlation_id}] ${message}`);
}

// File upload handler for plik server
async function plik_upload_handler(fileserver_url, dataname, data, correlation_id) {
    log_trace(`Uploading ${dataname} to fileserver: ${fileserver_url}`);

    // Step 1: Get upload ID and token
    const url_getUploadID = `${fileserver_url}/upload`;
    const headers = {
        "Content-Type": "application/json"
    };
    const body = JSON.stringify({ OneShot: true });

    let response = await fetch(url_getUploadID, {
        method: "POST",
        headers: headers,
        body: body
    });

    if (!response.ok) {
        throw new Error(`Failed to get upload ID: ${response.status} ${response.statusText}`);
    }

    const responseJson = await response.json();
    const uploadid = responseJson.id;
    const uploadtoken = responseJson.uploadToken;

    // Step 2: Upload file data
    const url_upload = `${fileserver_url}/file/${uploadid}`;

    // Create multipart form data
    const formData = new FormData();
    const blob = new Blob([data], { type: "application/octet-stream" });
    formData.append("file", blob, dataname);

    response = await fetch(url_upload, {
        method: "POST",
        headers: {
            "X-UploadToken": uploadtoken
        },
        body: formData
    });

    if (!response.ok) {
        throw new Error(`Failed to upload file: ${response.status} ${response.statusText}`);
    }

    const fileResponseJson = await response.json();
    const fileid = fileResponseJson.id;

    // Build the download URL
    const url = `${fileserver_url}/file/${uploadid}/${fileid}/${encodeURIComponent(dataname)}`;

    log_trace(`Uploaded to URL: ${url}`);

    return {
        status: response.status,
        uploadid: uploadid,
        fileid: fileid,
        url: url
    };
}

// Sender: Send large binary files via smartsend
async function test_large_binary_send() {
    // Read the files as binary data
    const fs = require('fs');

    // Test data 1
    const file_path1 = './testFile_large.zip';
    const file_data1 = fs.readFileSync(file_path1);
    const filename1 = 'testFile_large.zip';
    const data1 = { dataname: filename1, data: file_data1, type: "binary" };

    // Test data 2
    const file_path2 = './testFile_small.zip';
    const file_data2 = fs.readFileSync(file_path2);
    const filename2 = 'testFile_small.zip';
    const data2 = { dataname: filename2, data: file_data2, type: "binary" };

    // Use smartsend with binary type - it will automatically use link transport
    // if the file size exceeds the threshold (1MB by default)
    const { env, env_json_str } = await smartsend(
        SUBJECT,
        [data1, data2],
        {
            natsUrl: NATS_URL,
            fileserverUrl: FILESERVER_URL,
            fileserverUploadHandler: plik_upload_handler,
            sizeThreshold: 1_000_000,
            correlationId: correlation_id,
            msgPurpose: "chat",
            senderName: "sender",
            receiverName: "",
            receiverId: "",
            replyTo: "",
            replyToMsgId: "",
            isPublish: true // Publish the message to NATS
        }
    );

    log_trace(`Sent message with transport: ${env.payloads[0].transport}`);
    log_trace(`Envelope type: ${env.payloads[0].type}`);

    // Check if link transport was used
    if (env.payloads[0].transport === "link") {
        log_trace("Using link transport - file uploaded to HTTP server");
        log_trace(`URL: ${env.payloads[0].data}`);
    } else {
        log_trace("Using direct transport - payload sent via NATS");
    }
}

// Run the test
console.log("Starting large binary payload test...");
console.log(`Correlation ID: ${correlation_id}`);

// Run sender
console.log("start smartsend");
test_large_binary_send();

// Run the receiver separately with test_js_file_receiver.js
// console.log("testing smartreceive");
// test_large_binary_receive();

console.log("Test completed.");
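The upload handler above URL-encodes the file name with `encodeURIComponent` when building the download link, so names containing spaces or other reserved characters still produce a valid URL. A small illustration (the IDs here are hypothetical placeholders, not real plik responses):

```javascript
// Why the download URL passes dataname through encodeURIComponent.
// uploadid and fileid are hypothetical placeholder values.
const fileserverUrl = "http://192.168.88.104:8080"; // from the test config
const uploadid = "abc123";
const fileid = "f0";
const dataname = "my report.zip"; // name with a space

const url = `${fileserverUrl}/file/${uploadid}/${fileid}/${encodeURIComponent(dataname)}`;
console.log(url); // http://192.168.88.104:8080/file/abc123/f0/my%20report.zip
```

Without the encoding step, a raw space in the final path segment would yield a malformed URL that some HTTP clients reject.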
277 test/test_js_mix_payload_sender.js Normal file
@@ -0,0 +1,277 @@
#!/usr/bin/env node
// Test script for mixed-content message testing
// Tests sending a mix of text, json, table, image, audio, video, and binary data
// from JavaScript serviceA to JavaScript serviceB using NATSBridge.js smartsend
//
// This test demonstrates that any combination and any number of mixed content
// payloads can be sent and received correctly.

const { smartsend, uuid4 } = require('./src/NATSBridge');

// Configuration
const SUBJECT = "/NATSBridge_mix_test";
const NATS_URL = "nats.yiem.cc";
const FILESERVER_URL = "http://192.168.88.104:8080";

// Create correlation ID for tracing
const correlation_id = uuid4();

// Helper: Log with correlation ID
function log_trace(message) {
    const timestamp = new Date().toISOString();
    console.log(`[${timestamp}] [Correlation: ${correlation_id}] ${message}`);
}

// File upload handler for plik server
async function plik_upload_handler(fileserver_url, dataname, data, correlation_id) {
    log_trace(`Uploading ${dataname} to fileserver: ${fileserver_url}`);

    // Step 1: Get upload ID and token
    const url_getUploadID = `${fileserver_url}/upload`;
    const headers = {
        "Content-Type": "application/json"
    };
    const body = JSON.stringify({ OneShot: true });

    let response = await fetch(url_getUploadID, {
        method: "POST",
        headers: headers,
        body: body
    });

    if (!response.ok) {
        throw new Error(`Failed to get upload ID: ${response.status} ${response.statusText}`);
    }

    const responseJson = await response.json();
    const uploadid = responseJson.id;
    const uploadtoken = responseJson.uploadToken;

    // Step 2: Upload file data
    const url_upload = `${fileserver_url}/file/${uploadid}`;

    // Create multipart form data
    const formData = new FormData();
    const blob = new Blob([data], { type: "application/octet-stream" });
    formData.append("file", blob, dataname);

    response = await fetch(url_upload, {
        method: "POST",
        headers: {
            "X-UploadToken": uploadtoken
        },
        body: formData
    });

    if (!response.ok) {
        throw new Error(`Failed to upload file: ${response.status} ${response.statusText}`);
    }

    const fileResponseJson = await response.json();
    const fileid = fileResponseJson.id;

    // Build the download URL
    const url = `${fileserver_url}/file/${uploadid}/${fileid}/${encodeURIComponent(dataname)}`;

    log_trace(`Uploaded to URL: ${url}`);

    return {
        status: response.status,
        uploadid: uploadid,
        fileid: fileid,
        url: url
    };
}

// Helper: Create sample data for each type
function create_sample_data() {
    // Text data (small - direct transport)
    const text_data = "Hello! This is a test chat message. 🎉\nHow are you doing today? 😊";

    // Dictionary/JSON data (medium - could be direct or link)
    const dict_data = {
        type: "chat",
        sender: "serviceA",
        receiver: "serviceB",
        metadata: {
            timestamp: new Date().toISOString(),
            priority: "high",
            tags: ["urgent", "chat", "test"]
        },
        content: {
            text: "This is a JSON-formatted chat message with nested structure.",
            format: "markdown",
            mentions: ["user1", "user2"]
        }
    };

    // Table data (small - direct transport) - NOT IMPLEMENTED (requires apache-arrow)
    // const table_data_small = {...};

    // Table data (large - link transport) - NOT IMPLEMENTED (requires apache-arrow)
    // const table_data_large = {...};

    // Image data (small binary - direct transport)
    // Create simple 10x10 pixel PNG-like data
    const image_width = 10;
    const image_height = 10;
    // 8-byte PNG signature + 10*10*3 bytes of RGB data
    const image_data = new Uint8Array(8 + image_width * image_height * 3);
    // PNG header
    image_data[0] = 0x89;
    image_data[1] = 0x50;
    image_data[2] = 0x4E;
    image_data[3] = 0x47;
    image_data[4] = 0x0D;
    image_data[5] = 0x0A;
    image_data[6] = 0x1A;
    image_data[7] = 0x0A;
    // Simple RGB data (10*10*3 = 300 bytes)
    for (let i = 0; i < image_width * image_height * 3; i++) {
        image_data[i + 8] = 0xFF;
    }

    // Image data (large - link transport)
    const large_image_width = 500;
    const large_image_height = 1000;
    const large_image_data = new Uint8Array(large_image_width * large_image_height * 3 + 8);
    // PNG header
    large_image_data[0] = 0x89;
    large_image_data[1] = 0x50;
    large_image_data[2] = 0x4E;
    large_image_data[3] = 0x47;
    large_image_data[4] = 0x0D;
    large_image_data[5] = 0x0A;
    large_image_data[6] = 0x1A;
    large_image_data[7] = 0x0A;
    // Random RGB data
    for (let i = 0; i < large_image_width * large_image_height * 3; i++) {
        large_image_data[i + 8] = Math.floor(Math.random() * 255);
    }

    // Audio data (small binary - direct transport)
    const audio_data = new Uint8Array(100);
    for (let i = 0; i < 100; i++) {
        audio_data[i] = Math.floor(Math.random() * 255);
    }

    // Audio data (large - link transport)
    const large_audio_data = new Uint8Array(1_500_000);
    for (let i = 0; i < 1_500_000; i++) {
        large_audio_data[i] = Math.floor(Math.random() * 255);
    }

    // Video data (small binary - direct transport)
    const video_data = new Uint8Array(150);
    for (let i = 0; i < 150; i++) {
        video_data[i] = Math.floor(Math.random() * 255);
    }

    // Video data (large - link transport)
    const large_video_data = new Uint8Array(1_500_000);
    for (let i = 0; i < 1_500_000; i++) {
        large_video_data[i] = Math.floor(Math.random() * 255);
    }

    // Binary data (small - direct transport)
    const binary_data = new Uint8Array(200);
    for (let i = 0; i < 200; i++) {
        binary_data[i] = Math.floor(Math.random() * 255);
    }

    // Binary data (large - link transport)
    const large_binary_data = new Uint8Array(1_500_000);
    for (let i = 0; i < 1_500_000; i++) {
        large_binary_data[i] = Math.floor(Math.random() * 255);
    }

    return {
        text_data,
        dict_data,
        // table_data_small,
        // table_data_large,
        image_data,
        large_image_data,
        audio_data,
        large_audio_data,
        video_data,
        large_video_data,
        binary_data,
        large_binary_data
    };
}

// Sender: Send mixed content via smartsend
async function test_mix_send() {
    // Create sample data
    const { text_data, dict_data, image_data, large_image_data, audio_data, large_audio_data, video_data, large_video_data, binary_data, large_binary_data } = create_sample_data();

    // Create payloads list - mixed content with both small and large data
    // Small data uses direct transport, large data uses link transport
    const payloads = [
        // Small data (direct transport) - text, dictionary
        { dataname: "chat_text", data: text_data, type: "text" },
        { dataname: "chat_json", data: dict_data, type: "dictionary" },
        // { dataname: "chat_table_small", data: table_data_small, type: "table" },

        // Large data (link transport) - large image, large audio, large video, large binary
        // { dataname: "chat_table_large", data: table_data_large, type: "table" },
        { dataname: "user_image_large", data: large_image_data, type: "image" },
        { dataname: "audio_clip_large", data: large_audio_data, type: "audio" },
        { dataname: "video_clip_large", data: large_video_data, type: "video" },
        { dataname: "binary_file_large", data: large_binary_data, type: "binary" }
    ];

    // Use smartsend with mixed content
    const { env, env_json_str } = await smartsend(
        SUBJECT,
        payloads,
        {
            natsUrl: NATS_URL,
            fileserverUrl: FILESERVER_URL,
            fileserverUploadHandler: plik_upload_handler,
            sizeThreshold: 1_000_000,
            correlationId: correlation_id,
            msgPurpose: "chat",
            senderName: "mix_sender",
            receiverName: "",
            receiverId: "",
            replyTo: "",
            replyToMsgId: "",
            isPublish: true // Publish the message to NATS
        }
    );

    log_trace(`Sent message with ${env.payloads.length} payloads`);

    // Log transport type for each payload
    for (let i = 0; i < env.payloads.length; i++) {
        const payload = env.payloads[i];
        log_trace(`Payload ${i + 1} ('${payload.dataname}'):`);
        log_trace(`  Transport: ${payload.transport}`);
        log_trace(`  Type: ${payload.type}`);
        log_trace(`  Size: ${payload.size} bytes`);
        log_trace(`  Encoding: ${payload.encoding}`);

        if (payload.transport === "link") {
            log_trace(`  URL: ${payload.data}`);
        }
    }

    // Summary
    console.log("\n--- Transport Summary ---");
    const direct_count = env.payloads.filter(p => p.transport === "direct").length;
    const link_count = env.payloads.filter(p => p.transport === "link").length;
    log_trace(`Direct transport: ${direct_count} payloads`);
    log_trace(`Link transport: ${link_count} payloads`);
}

// Run the test
console.log("Starting mixed-content transport test...");
console.log(`Correlation ID: ${correlation_id}`);

// Run sender
console.log("start smartsend for mixed content");
test_mix_send();

console.log("\nTest completed.");
console.log("Note: Run test_js_mix_payloads_receiver.js to receive the messages.");
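The mix sender above prefixes its fabricated image buffers with the standard 8-byte PNG signature (0x89 'P' 'N' 'G' CR LF 0x1A LF). A small check that a buffer carries that signature, which the receiver side could use to sanity-check image payloads (the helper is illustrative, not part of NATSBridge):

```javascript
// Illustrative helper: verify a byte buffer starts with the PNG signature
// the sender writes into its test image data.
const PNG_SIGNATURE = [0x89, 0x50, 0x4E, 0x47, 0x0D, 0x0A, 0x1A, 0x0A];

function hasPngSignature(bytes) {
    return bytes.length >= 8 && PNG_SIGNATURE.every((b, i) => bytes[i] === b);
}

const buf = new Uint8Array(16);
PNG_SIGNATURE.forEach((b, i) => { buf[i] = b; });
console.log(hasPngSignature(buf));               // true
console.log(hasPngSignature(new Uint8Array(8))); // false (all zero bytes)
```

Note the test buffers are only PNG-like: they carry the signature followed by raw RGB bytes, not valid PNG chunks, which is fine for transport testing.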
173 test/test_js_mix_payloads_receiver.js Normal file
@@ -0,0 +1,173 @@
#!/usr/bin/env node
// Test script for mixed-content message testing
// Tests receiving a mix of text, json, table, image, audio, video, and binary data
// from JavaScript serviceA to JavaScript serviceB using NATSBridge.js smartreceive
//
// This test demonstrates that any combination and any number of mixed content
// payloads can be sent and received correctly.

const { smartreceive } = require('./src/NATSBridge');

// Configuration
const SUBJECT = "/NATSBridge_mix_test";
const NATS_URL = "nats.yiem.cc";

// Helper: Log with timestamp
function log_trace(message) {
    const timestamp = new Date().toISOString();
    console.log(`[${timestamp}] ${message}`);
}

// Receiver: Listen for messages and verify mixed content handling
async function test_mix_receive() {
    // Connect to NATS
    const { connect } = require('nats');
    const nc = await connect({ servers: [NATS_URL] });

    // Subscribe to the subject
    const sub = nc.subscribe(SUBJECT);

    // Keep listening for 2 minutes, then close. The timer must be registered
    // before entering the receive loop, because the loop blocks until the
    // subscription closes.
    setTimeout(() => {
        nc.close();
        process.exit(0);
    }, 120000);

    for await (const msg of sub) {
        log_trace(`Received message on ${msg.subject}`);

        // Use NATSBridge.smartreceive to handle the data
        const result = await smartreceive(
            msg,
            {
                maxRetries: 5,
                baseDelay: 100,
                maxDelay: 5000
            }
        );

        log_trace(`Received ${result.payloads.length} payloads`);

        // Result is an envelope dictionary with a payloads field
        for (const { dataname, data, type } of result.payloads) {
            log_trace(`\n=== Payload: ${dataname} (type: ${type}) ===`);

            // Handle different data types
            if (type === "text") {
                // Text data - should be a String
                if (typeof data === 'string') {
                    log_trace(`  Type: String`);
                    log_trace(`  Length: ${data.length} characters`);

                    // Display first 200 characters
                    if (data.length > 200) {
                        log_trace(`  First 200 chars: ${data.substring(0, 200)}...`);
                    } else {
                        log_trace(`  Content: ${data}`);
                    }

                    // Save to file
                    const fs = require('fs');
                    const output_path = `./received_${dataname}.txt`;
                    fs.writeFileSync(output_path, data);
                    log_trace(`  Saved to: ${output_path}`);
                } else {
                    log_trace(`  ERROR: Expected String, got ${typeof data}`);
                }

            } else if (type === "dictionary") {
                // Dictionary data - should be an object
                if (typeof data === 'object' && data !== null && !Array.isArray(data)) {
                    log_trace(`  Type: Object`);
                    log_trace(`  Keys: ${Object.keys(data).join(', ')}`);

                    // Display nested content
                    for (const [key, value] of Object.entries(data)) {
                        log_trace(`  ${key} => ${JSON.stringify(value)}`);
                    }

                    // Save to JSON file
                    const fs = require('fs');
                    const output_path = `./received_${dataname}.json`;
                    const json_str = JSON.stringify(data, null, 2);
                    fs.writeFileSync(output_path, json_str);
                    log_trace(`  Saved to: ${output_path}`);
                } else {
                    log_trace(`  ERROR: Expected Object, got ${typeof data}`);
                }

            } else if (type === "table") {
                // Table data - should be an array of objects (requires apache-arrow)
                log_trace(`  Type: Array (requires apache-arrow for full deserialization)`);
                if (Array.isArray(data)) {
                    log_trace(`  Length: ${data.length} items`);
                    log_trace(`  First item: ${JSON.stringify(data[0])}`);
                } else {
                    log_trace(`  ERROR: Expected Array, got ${typeof data}`);
                }

            } else if (type === "image" || type === "audio" || type === "video" || type === "binary") {
                // Binary data - should be Uint8Array
                if (data instanceof Uint8Array || Array.isArray(data)) {
                    log_trace(`  Type: Uint8Array (binary)`);
                    log_trace(`  Size: ${data.length} bytes`);

                    // Save to file
                    const fs = require('fs');
                    const output_path = `./received_${dataname}.bin`;
                    fs.writeFileSync(output_path, Buffer.from(data));
                    log_trace(`  Saved to: ${output_path}`);
                } else {
                    log_trace(`  ERROR: Expected Uint8Array, got ${typeof data}`);
                }

            } else {
                log_trace(`  ERROR: Unknown data type '${type}'`);
            }
        }

        // Summary
        console.log("\n=== Verification Summary ===");
        const text_count = result.payloads.filter(x => x.type === "text").length;
        const dict_count = result.payloads.filter(x => x.type === "dictionary").length;
        const table_count = result.payloads.filter(x => x.type === "table").length;
        const image_count = result.payloads.filter(x => x.type === "image").length;
        const audio_count = result.payloads.filter(x => x.type === "audio").length;
        const video_count = result.payloads.filter(x => x.type === "video").length;
        const binary_count = result.payloads.filter(x => x.type === "binary").length;

        log_trace(`Text payloads: ${text_count}`);
        log_trace(`Dictionary payloads: ${dict_count}`);
        log_trace(`Table payloads: ${table_count}`);
        log_trace(`Image payloads: ${image_count}`);
        log_trace(`Audio payloads: ${audio_count}`);
        log_trace(`Video payloads: ${video_count}`);
        log_trace(`Binary payloads: ${binary_count}`);

        // Print size info for each payload
        console.log("\n=== Payload Details ===");
        for (const { dataname, data, type } of result.payloads) {
            if (["image", "audio", "video", "binary"].includes(type)) {
                log_trace(`${dataname}: ${data.length} bytes (binary)`);
            } else if (type === "table") {
                log_trace(`${dataname}: ${data.length} items (Array)`);
            } else if (type === "dictionary") {
                log_trace(`${dataname}: ${JSON.stringify(data).length} bytes (Object)`);
            } else if (type === "text") {
                log_trace(`${dataname}: ${data.length} characters (String)`);
            }
        }
    }
}

// Run the test
console.log("Starting mixed-content transport test...");
console.log("Note: This receiver will wait for messages from the sender.");
console.log("Run test_js_mix_payload_sender.js first to send test data.");

// Run receiver
console.log("\ntesting smartreceive for mixed content");
test_mix_receive();
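The mixed-content receiver above saves each payload under an extension chosen from its `type` field (`.txt` for text, `.json` for dictionaries and tables, `.bin` for the binary family). A compact version of that mapping, factored out for illustration (`outputPath` is a hypothetical helper, not a NATSBridge API):

```javascript
// Illustrative helper mirroring how the receiver above chooses an output
// file name per payload type; binary-family types all fall through to .bin.
function outputPath(dataname, type) {
    const ext = { text: ".txt", dictionary: ".json", table: ".json" }[type] ?? ".bin";
    return `./received_${dataname}${ext}`;
}

console.log(outputPath("chat_text", "text"));        // ./received_chat_text.txt
console.log(outputPath("video_clip_large", "video")); // ./received_video_clip_large.bin
```

Centralizing the mapping like this would also make the receiver's four save branches easier to keep consistent.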
87 test/test_js_table_receiver.js Normal file
@@ -0,0 +1,87 @@
#!/usr/bin/env node
// Test script for Table transport testing
// Tests receiving 1 large and 1 small Table via direct and link transport
// Uses NATSBridge.js smartreceive with "table" type
//
// Note: This test requires the apache-arrow library to deserialize table data.
// The JavaScript implementation uses apache-arrow for Arrow IPC deserialization.

const { smartreceive } = require('./src/NATSBridge');

// Configuration
const SUBJECT = "/NATSBridge_table_test";
const NATS_URL = "nats.yiem.cc";

// Helper: Log with timestamp
function log_trace(message) {
    const timestamp = new Date().toISOString();
    console.log(`[${timestamp}] ${message}`);
}

// Receiver: Listen for messages and verify Table handling
async function test_table_receive() {
    // Connect to NATS
    const { connect } = require('nats');
    const nc = await connect({ servers: [NATS_URL] });

    // Subscribe to the subject
    const sub = nc.subscribe(SUBJECT);

    // Keep listening for 2 minutes, then close. The timer must be registered
    // before entering the receive loop, because the loop blocks until the
    // subscription closes.
    setTimeout(() => {
        nc.close();
        process.exit(0);
    }, 120000);

    for await (const msg of sub) {
        log_trace(`Received message on ${msg.subject}`);

        // Use NATSBridge.smartreceive to handle the data
        const result = await smartreceive(
            msg,
            {
                maxRetries: 5,
                baseDelay: 100,
                maxDelay: 5000
            }
        );

        // Result is an envelope dictionary with a payloads field
        for (const { dataname, data, type } of result.payloads) {
            if (Array.isArray(data)) {
                log_trace(`Received Table '${dataname}' of type ${type}`);

                // Display table contents
                console.log(`  Dimensions: ${data.length} rows x ${data.length > 0 ? Object.keys(data[0]).length : 0} columns`);
                console.log(`  Columns: ${data.length > 0 ? Object.keys(data[0]).join(', ') : ''}`);

                // Display first few rows
                console.log(`  First 5 rows:`);
                for (let i = 0; i < Math.min(5, data.length); i++) {
                    console.log(`    Row ${i}: ${JSON.stringify(data[i])}`);
                }

                // Save to JSON file
                const fs = require('fs');
                const output_path = `./received_${dataname}.json`;
                const json_str = JSON.stringify(data, null, 2);
                fs.writeFileSync(output_path, json_str);
                log_trace(`Saved Table to ${output_path}`);
            } else {
                log_trace(`Received unexpected data type for '${dataname}': ${typeof data}`);
            }
        }
    }
}

// Run the test
console.log("Starting Table transport test...");
console.log("Note: This receiver will wait for messages from the sender.");
console.log("Run test_js_table_sender.js first to send test data.");

// Run receiver
console.log("testing smartreceive");
test_table_receive();

console.log("Test completed.");
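The table receiver above iterates tables as an array of row objects, while the dictionary tests elsewhere build large data column-by-column (parallel `ids`, `names`, ... arrays). Assuming the deserialized form is row-oriented, a sketch of converting a column-oriented object into the row array this receiver expects (`columnsToRows` is hypothetical, not part of NATSBridge or apache-arrow):

```javascript
// Hypothetical helper: turn an object of equal-length column arrays into
// the array-of-row-objects shape the table receiver iterates.
function columnsToRows(columns) {
    const names = Object.keys(columns);
    const n = names.length ? columns[names[0]].length : 0;
    return Array.from({ length: n }, (_, i) =>
        Object.fromEntries(names.map(name => [name, columns[name][i]])));
}

console.log(columnsToRows({ id: [1, 2], name: ["a", "b"] }));
// [ { id: 1, name: 'a' }, { id: 2, name: 'b' } ]
```

With that shape, `Object.keys(data[0])` in the receiver yields the column names and `data.length` the row count, matching the "Dimensions" line it prints.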
165 test/test_js_table_sender.js Normal file
@@ -0,0 +1,165 @@
|
||||
#!/usr/bin/env node
// Test script for Table transport testing
// Tests sending 1 large and 1 small Table via direct and link transport
// Uses NATSBridge.js smartsend with "table" type
//
// Note: this test requires the apache-arrow library to serialize/deserialize table data.
// The JavaScript implementation uses apache-arrow for Arrow IPC serialization.

const { smartsend, uuid4 } = require('./src/NATSBridge');

// Configuration
const SUBJECT = "/NATSBridge_table_test";
const NATS_URL = "nats.yiem.cc";
const FILESERVER_URL = "http://192.168.88.104:8080";

// Create correlation ID for tracing
const correlation_id = uuid4();

// Helper: log with correlation ID
// (defined locally; importing log_trace from NATSBridge as well would
// clash with this declaration)
function log_trace(message) {
  const timestamp = new Date().toISOString();
  console.log(`[${timestamp}] [Correlation: ${correlation_id}] ${message}`);
}

// File upload handler for the plik server
async function plik_upload_handler(fileserver_url, dataname, data, correlation_id) {
  log_trace(`Uploading ${dataname} to fileserver: ${fileserver_url}`);

  // Step 1: Get upload ID and token
  const url_getUploadID = `${fileserver_url}/upload`;
  const headers = {
    "Content-Type": "application/json"
  };
  const body = JSON.stringify({ OneShot: true });

  let response = await fetch(url_getUploadID, {
    method: "POST",
    headers: headers,
    body: body
  });

  if (!response.ok) {
    throw new Error(`Failed to get upload ID: ${response.status} ${response.statusText}`);
  }

  const responseJson = await response.json();
  const uploadid = responseJson.id;
  const uploadtoken = responseJson.uploadToken;

  // Step 2: Upload file data
  const url_upload = `${fileserver_url}/file/${uploadid}`;

  // Create multipart form data
  const formData = new FormData();
  const blob = new Blob([data], { type: "application/octet-stream" });
  formData.append("file", blob, dataname);

  response = await fetch(url_upload, {
    method: "POST",
    headers: {
      "X-UploadToken": uploadtoken
    },
    body: formData
  });

  if (!response.ok) {
    throw new Error(`Failed to upload file: ${response.status} ${response.statusText}`);
  }

  const fileResponseJson = await response.json();
  const fileid = fileResponseJson.id;

  // Build the download URL
  const url = `${fileserver_url}/file/${uploadid}/${fileid}/${encodeURIComponent(dataname)}`;

  log_trace(`Uploaded to URL: ${url}`);

  return {
    status: response.status,
    uploadid: uploadid,
    fileid: fileid,
    url: url
  };
}

// Sender: Send Tables via smartsend
async function test_table_send() {
  // Note: this test requires the apache-arrow library to create Arrow IPC data.
  // For now, we use a simple array of objects as table data.
  // In production, you would use apache-arrow to create Arrow IPC data.

  // Create a small Table (will use direct transport)
  const small_table = [
    { id: 1, name: "Alice", score: 95 },
    { id: 2, name: "Bob", score: 88 },
    { id: 3, name: "Charlie", score: 92 }
  ];

  // Create a large Table (will use link transport if > 1MB)
  // Generate a larger dataset (~2MB to ensure link transport)
  const large_table = [];
  for (let i = 0; i < 50000; i++) {
    large_table.push({
      id: i,
      message: `msg_${i}`,
      sender: `sender_${i}`,
      timestamp: new Date().toISOString(),
      priority: Math.floor(Math.random() * 3) + 1
    });
  }

  // Test data 1: small Table
  const data1 = { dataname: "small_table", data: small_table, type: "table" };

  // Test data 2: large Table
  const data2 = { dataname: "large_table", data: large_table, type: "table" };

  // Use smartsend with table type
  // For the small Table: direct transport (Arrow IPC encoded)
  // For the large Table: link transport (uploaded to the fileserver)
  const { env, env_json_str } = await smartsend(
    SUBJECT,
    [data1, data2],
    {
      natsUrl: NATS_URL,
      fileserverUrl: FILESERVER_URL,
      fileserverUploadHandler: plik_upload_handler,
      sizeThreshold: 1_000_000,
      correlationId: correlation_id,
      msgPurpose: "chat",
      senderName: "table_sender",
      receiverName: "",
      receiverId: "",
      replyTo: "",
      replyToMsgId: "",
      isPublish: true // Publish the message to NATS
    }
  );

  log_trace(`Sent message with ${env.payloads.length} payloads`);

  // Log the transport type for each payload
  for (let i = 0; i < env.payloads.length; i++) {
    const payload = env.payloads[i];
    log_trace(`Payload ${i + 1} ('${payload.dataname}'):`);
    log_trace(`  Transport: ${payload.transport}`);
    log_trace(`  Type: ${payload.type}`);
    log_trace(`  Size: ${payload.size} bytes`);
    log_trace(`  Encoding: ${payload.encoding}`);

    if (payload.transport === "link") {
      log_trace(`  URL: ${payload.data}`);
    }
  }
}

// Run the test
console.log("Starting Table transport test...");
console.log(`Correlation ID: ${correlation_id}`);

// Run sender
console.log("start smartsend for tables");
test_table_send();

console.log("Test completed.");
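The sender above relies on `smartsend` routing each payload by its serialized size against `sizeThreshold`. A minimal sketch of that direct-vs-link decision, assuming UTF-8 byte counting for strings and JSON serialization for structured data (`chooseTransport` is a hypothetical helper; the real implementation encodes tables as Arrow IPC before measuring):

```javascript
// Hedged sketch of the transport decision smartsend plausibly makes.
// Assumption: strings are measured as UTF-8 bytes and structured data
// by its JSON serialization; NATSBridge's actual encoding may differ.
function chooseTransport(data, sizeThreshold) {
  const bytes = typeof data === 'string'
    ? Buffer.byteLength(data, 'utf8')
    : Buffer.byteLength(JSON.stringify(data), 'utf8');
  // Payloads over the threshold go to the fileserver; the message then
  // carries only a download link.
  return { transport: bytes > sizeThreshold ? 'link' : 'direct', size: bytes };
}

console.log(chooseTransport([{ id: 1, name: 'Alice' }], 1_000_000).transport); // direct
```

This is why the 50,000-row table (~2MB serialized) is expected to take the link path while the three-row table stays direct.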
test/test_js_text_receiver.js (new file, 81 lines)
@@ -0,0 +1,81 @@
#!/usr/bin/env node
// Test script for text transport testing
// Tests receiving 1 large and 1 small text from JavaScript serviceA to JavaScript serviceB
// Uses NATSBridge.js smartreceive with "text" type

const { smartreceive } = require('./src/NATSBridge');

// Configuration
const SUBJECT = "/NATSBridge_text_test";
const NATS_URL = "nats.yiem.cc";

// Helper: log with a timestamp
// (defined locally; importing log_trace from NATSBridge as well would
// clash with this declaration)
function log_trace(message) {
  const timestamp = new Date().toISOString();
  console.log(`[${timestamp}] ${message}`);
}

// Receiver: Listen for messages and verify text handling
async function test_text_receive() {
  // Connect to NATS
  const { connect } = require('nats');
  const nc = await connect({ servers: [NATS_URL] });

  // Subscribe to the subject
  const sub = nc.subscribe(SUBJECT);

  // Stop listening after 120 seconds (scheduled before the subscription
  // loop below, which would otherwise block forever)
  setTimeout(() => {
    nc.close();
    process.exit(0);
  }, 120000);

  for await (const msg of sub) {
    log_trace(`Received message on ${msg.subject}`);

    // Use NATSBridge.smartreceive to handle the data
    const result = await smartreceive(
      msg,
      {
        maxRetries: 5,
        baseDelay: 100,
        maxDelay: 5000
      }
    );

    // Result is an envelope dictionary with a payloads field
    // Access payloads with result.payloads
    for (const { dataname, data, type } of result.payloads) {
      if (typeof data === 'string') {
        log_trace(`Received text '${dataname}' of type ${type}`);
        log_trace(`  Length: ${data.length} characters`);

        // Display the first 100 characters
        if (data.length > 100) {
          log_trace(`  First 100 characters: ${data.substring(0, 100)}...`);
        } else {
          log_trace(`  Content: ${data}`);
        }

        // Save to file
        const fs = require('fs');
        const output_path = `./received_${dataname}.txt`;
        fs.writeFileSync(output_path, data);
        log_trace(`Saved text to ${output_path}`);
      } else {
        log_trace(`Received unexpected data type for '${dataname}': ${typeof data}`);
      }
    }
  }
}

// Run the test
console.log("Starting text transport test...");
console.log("Note: This receiver will wait for messages from the sender.");
console.log("Run test_js_text_sender.js first to send test data.");

// Run receiver
console.log("testing smartreceive for text");
test_text_receive();

console.log("Test completed.");
test/test_js_text_sender.js (new file, 141 lines)
@@ -0,0 +1,141 @@
#!/usr/bin/env node
// Test script for text transport testing
// Tests sending 1 large and 1 small text from JavaScript serviceA to JavaScript serviceB
// Uses NATSBridge.js smartsend with "text" type

const { smartsend, uuid4 } = require('./src/NATSBridge');

// Configuration
const SUBJECT = "/NATSBridge_text_test";
const NATS_URL = "nats.yiem.cc";
const FILESERVER_URL = "http://192.168.88.104:8080";

// Create correlation ID for tracing
const correlation_id = uuid4();

// Helper: log with correlation ID
// (defined locally; importing log_trace from NATSBridge as well would
// clash with this declaration)
function log_trace(message) {
  const timestamp = new Date().toISOString();
  console.log(`[${timestamp}] [Correlation: ${correlation_id}] ${message}`);
}

// File upload handler for the plik server
async function plik_upload_handler(fileserver_url, dataname, data, correlation_id) {
  // Get upload ID and token
  const url_getUploadID = `${fileserver_url}/upload`;
  const headers = {
    "Content-Type": "application/json"
  };
  const body = JSON.stringify({ OneShot: true });

  let response = await fetch(url_getUploadID, {
    method: "POST",
    headers: headers,
    body: body
  });

  if (!response.ok) {
    throw new Error(`Failed to get upload ID: ${response.status} ${response.statusText}`);
  }

  const responseJson = await response.json();
  const uploadid = responseJson.id;
  const uploadtoken = responseJson.uploadToken;

  // Upload the file as multipart form data
  const formData = new FormData();
  const blob = new Blob([data], { type: "application/octet-stream" });
  formData.append("file", blob, dataname);

  response = await fetch(`${fileserver_url}/file/${uploadid}`, {
    method: "POST",
    headers: {
      "X-UploadToken": uploadtoken
    },
    body: formData
  });

  if (!response.ok) {
    throw new Error(`Failed to upload file: ${response.status} ${response.statusText}`);
  }

  const fileResponseJson = await response.json();
  const fileid = fileResponseJson.id;

  const url = `${fileserver_url}/file/${uploadid}/${fileid}/${encodeURIComponent(dataname)}`;

  return {
    status: response.status,
    uploadid: uploadid,
    fileid: fileid,
    url: url
  };
}

// Sender: Send text via smartsend
async function test_text_send() {
  // Create a small text (will use direct transport)
  const small_text = "Hello, this is a small text message. Testing direct transport via NATS.";

  // Create a large text (will use link transport if > 1MB)
  // Generate a larger text (~2MB to ensure link transport)
  const large_text_lines = [];
  for (let i = 0; i < 50000; i++) {
    large_text_lines.push(`Line ${i}: This is a sample text line with some content to pad the size. `);
  }
  const large_text = large_text_lines.join("");

  // Test data 1: small text
  const data1 = { dataname: "small_text", data: small_text, type: "text" };

  // Test data 2: large text
  const data2 = { dataname: "large_text", data: large_text, type: "text" };

  // Use smartsend with text type
  // For the small text: direct transport (Base64-encoded UTF-8)
  // For the large text: link transport (uploaded to the fileserver)
  const { env, env_json_str } = await smartsend(
    SUBJECT,
    [data1, data2],
    {
      natsUrl: NATS_URL,
      fileserverUrl: FILESERVER_URL,
      fileserverUploadHandler: plik_upload_handler,
      sizeThreshold: 1_000_000,
      correlationId: correlation_id,
      msgPurpose: "chat",
      senderName: "text_sender",
      receiverName: "",
      receiverId: "",
      replyTo: "",
      replyToMsgId: "",
      isPublish: true // Publish the message to NATS
    }
  );

  log_trace(`Sent message with ${env.payloads.length} payloads`);

  // Log the transport type for each payload
  for (let i = 0; i < env.payloads.length; i++) {
    const payload = env.payloads[i];
    log_trace(`Payload ${i + 1} ('${payload.dataname}'):`);
    log_trace(`  Transport: ${payload.transport}`);
    log_trace(`  Type: ${payload.type}`);
    log_trace(`  Size: ${payload.size} bytes`);
    log_trace(`  Encoding: ${payload.encoding}`);

    if (payload.transport === "link") {
      log_trace(`  URL: ${payload.data}`);
    }
  }
}

// Run the test
console.log("Starting text transport test...");
console.log(`Correlation ID: ${correlation_id}`);

// Run sender
console.log("start smartsend for text");
test_text_send();

console.log("Test completed.");
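The sender's comment says small text travels direct as Base64-encoded UTF-8. A minimal sketch of that encode/decode roundtrip in Node (the exact encoding step inside NATSBridge is an assumption; only the Buffer API usage is standard):

```javascript
// Encode text to Base64 for direct transport, then decode on receipt.
const text = "Hello, this is a small text message.";
const encoded = Buffer.from(text, 'utf8').toString('base64');
const decoded = Buffer.from(encoded, 'base64').toString('utf8');
console.log(decoded === text); // true
```

Base64 keeps arbitrary UTF-8 safe inside a JSON envelope at the cost of roughly a 4/3 size overhead, which is one reason large text is pushed to the fileserver instead.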
test/test_julia_dict_receiver.jl (new file, 82 lines)
@@ -0,0 +1,82 @@
#!/usr/bin/env julia
# Test script for Dictionary transport testing
# Tests receiving 1 large and 1 small Dictionary via direct and link transport
# Uses NATSBridge.jl smartreceive with "dictionary" type

using NATS, JSON, UUIDs, Dates, PrettyPrinting, DataFrames, Arrow, HTTP

# Include the bridge module
include("../src/NATSBridge.jl")
using .NATSBridge

# Configuration
const SUBJECT = "/NATSBridge_dict_test"
const NATS_URL = "nats.yiem.cc"
const FILESERVER_URL = "http://192.168.88.104:8080"


# ------------------------------------------------------------------------------------------------ #
#                                    test dictionary transfer                                      #
# ------------------------------------------------------------------------------------------------ #


# Helper: log with a timestamp
function log_trace(message)
    timestamp = Dates.now()
    println("[$timestamp] $message")
end


# Receiver: Listen for messages and verify Dictionary handling
function test_dict_receive()
    conn = NATS.connect(NATS_URL)
    NATS.subscribe(conn, SUBJECT) do msg
        log_trace("Received message on $(msg.subject)")

        # Use NATSBridge.smartreceive to handle the data
        # API: smartreceive(msg, download_handler; max_retries, base_delay, max_delay)
        result = NATSBridge.smartreceive(
            msg;
            max_retries = 5,
            base_delay = 100,
            max_delay = 5000
        )

        # Result is an envelope dictionary whose payloads field holds (dataname, data, data_type) tuples
        for (dataname, data, data_type) in result["payloads"]
            if isa(data, AbstractDict)
                log_trace("Received Dictionary '$dataname' of type $data_type")

                # Display dictionary contents
                println("  Contents:")
                for (key, value) in data
                    println("    $key => $value")
                end

                # Save to a JSON file
                output_path = "./received_$dataname.json"
                json_str = JSON.json(data, 2)
                write(output_path, json_str)
                log_trace("Saved Dictionary to $output_path")
            else
                log_trace("Received unexpected data type for '$dataname': $(typeof(data))")
            end
        end
    end

    # Keep listening for 120 seconds
    sleep(120)
    NATS.drain(conn)
end


# Run the test
println("Starting Dictionary transport test...")
println("Note: This receiver will wait for messages from the sender.")
println("Run test_julia_dict_sender.jl first to send test data.")

# Run receiver
println("testing smartreceive")
test_dict_receive()

println("Test completed.")
test/test_julia_dict_sender.jl (new file, 137 lines)
@@ -0,0 +1,137 @@
#!/usr/bin/env julia
# Test script for Dictionary transport testing
# Tests sending 1 large and 1 small Dictionary via direct and link transport
# Uses NATSBridge.jl smartsend with "dictionary" type

using NATS, JSON, UUIDs, Dates, PrettyPrinting, DataFrames, Arrow, HTTP

# Include the bridge module
include("../src/NATSBridge.jl")
using .NATSBridge

# Configuration
const SUBJECT = "/NATSBridge_dict_test"
const NATS_URL = "nats.yiem.cc"
const FILESERVER_URL = "http://192.168.88.104:8080"

# Create correlation ID for tracing
correlation_id = string(uuid4())


# ------------------------------------------------------------------------------------------------ #
#                                    test dictionary transfer                                      #
# ------------------------------------------------------------------------------------------------ #


# Helper: Log with correlation ID
function log_trace(message)
    timestamp = Dates.now()
    println("[$timestamp] [Correlation: $correlation_id] $message")
end


# File upload handler for the plik server
function plik_upload_handler(fileserver_url::String, dataname::String, data::Vector{UInt8})::Dict{String, Any}
    # Get upload ID and token
    url_getUploadID = "$fileserver_url/upload"
    headers = ["Content-Type" => "application/json"]
    body = """{ "OneShot" : true }"""
    httpResponse = HTTP.request("POST", url_getUploadID, headers, body)
    responseJson = JSON.parse(String(httpResponse.body))
    uploadid = responseJson["id"]
    uploadtoken = responseJson["uploadToken"]

    # Upload the file as multipart form data
    file_multipart = HTTP.Multipart(dataname, IOBuffer(data), "application/octet-stream")
    url_upload = "$fileserver_url/file/$uploadid"
    headers = ["X-UploadToken" => uploadtoken]

    form = HTTP.Form(Dict("file" => file_multipart))
    httpResponse = HTTP.post(url_upload, headers, form)
    responseJson = JSON.parse(String(httpResponse.body))

    fileid = responseJson["id"]
    url = "$fileserver_url/file/$uploadid/$fileid/$dataname"

    return Dict("status" => httpResponse.status, "uploadid" => uploadid, "fileid" => fileid, "url" => url)
end


# Sender: Send Dictionaries via smartsend
function test_dict_send()
    # Create a small Dictionary (will use direct transport)
    small_dict = Dict(
        "name" => "Alice",
        "age" => 30,
        "scores" => [95, 88, 92],
        "metadata" => Dict(
            "height" => 155,
            "weight" => 55
        )
    )

    # Create a large Dictionary (will use link transport if > 1MB)
    # Generate a larger dataset (~2MB to ensure link transport)
    large_dict = Dict(
        "ids" => collect(1:50000),
        "names" => ["User_$i" for i in 1:50000],
        "scores" => rand(1:100, 50000),
        "categories" => ["Category_$(rand(1:10))" for i in 1:50000],
        "metadata" => Dict(
            "source" => "test_generator",
            "timestamp" => string(Dates.now())
        )
    )

    # Test data 1: small Dictionary
    data1 = ("small_dict", small_dict, "dictionary")

    # Test data 2: large Dictionary
    data2 = ("large_dict", large_dict, "dictionary")

    # Use smartsend with dictionary type
    # For the small Dictionary: direct transport (JSON encoded)
    # For the large Dictionary: link transport (uploaded to the fileserver)
    env, env_json_str = NATSBridge.smartsend(
        SUBJECT,
        [data1, data2]; # List of (dataname, data, type) tuples
        broker_url = NATS_URL,
        fileserver_url = FILESERVER_URL,
        fileserver_upload_handler = plik_upload_handler,
        size_threshold = 1_000_000, # 1MB threshold
        correlation_id = correlation_id,
        msg_purpose = "chat",
        sender_name = "dict_sender",
        receiver_name = "",
        receiver_id = "",
        reply_to = "",
        reply_to_msg_id = "",
        is_publish = true # Publish the message to NATS
    )

    log_trace("Sent message with $(length(env.payloads)) payloads")

    # Log the transport type for each payload
    for (i, payload) in enumerate(env.payloads)
        log_trace("Payload $i ('$(payload.dataname)'):")
        log_trace("  Transport: $(payload.transport)")
        log_trace("  Type: $(payload.payload_type)")
        log_trace("  Size: $(payload.size) bytes")
        log_trace("  Encoding: $(payload.encoding)")

        if payload.transport == "link"
            log_trace("  URL: $(payload.data)")
        end
    end
end


# Run the test
println("Starting Dictionary transport test...")
println("Correlation ID: $correlation_id")

# Run sender
println("start smartsend for dictionaries")
test_dict_send()

println("Test completed.")
test/test_julia_file_receiver.jl (new file, 84 lines)
@@ -0,0 +1,84 @@
#!/usr/bin/env julia
# Test script for large payload testing using binary transport
# Tests receiving a large file (> 1MB) sent via smartsend with binary type
# Updated to match the NATSBridge.jl API

using NATS, JSON, UUIDs, Dates, PrettyPrinting, DataFrames, Arrow, HTTP


# workdir =

# Include the bridge module
include("../src/NATSBridge.jl")
using .NATSBridge

# Configuration
const SUBJECT = "/NATSBridge_test"
const NATS_URL = "nats.yiem.cc"
const FILESERVER_URL = "http://192.168.88.104:8080"


# ------------------------------------------------------------------------------------------------ #
#                                        test file transfer                                        #
# ------------------------------------------------------------------------------------------------ #

# Helper: log with a timestamp
function log_trace(message)
    timestamp = Dates.now()
    println("[$timestamp] $message")
end

# Receiver: Listen for messages and verify large payload handling
function test_large_binary_receive()
    conn = NATS.connect(NATS_URL)
    NATS.subscribe(conn, SUBJECT) do msg
        log_trace("Received message on $(msg.subject)")

        # Use NATSBridge.smartreceive to handle the data
        # API: smartreceive(msg, download_handler; max_retries, base_delay, max_delay)
        result = NATSBridge.smartreceive(
            msg;
            max_retries = 5,
            base_delay = 100,
            max_delay = 5000
        )

        # Result is an envelope dictionary whose payloads field holds (dataname, data, data_type) tuples
        for (dataname, data, data_type) in result["payloads"]
            # Check transport type from the envelope
            # For link transport, data is the URL string
            # For direct transport, data is the actual payload bytes

            if isa(data, Vector{UInt8})
                file_size = length(data)
                log_trace("Received $(file_size) bytes of binary data for '$dataname' of type $data_type")

                # Save received data to a test file
                output_path = "./new_$dataname"
                write(output_path, data)
                log_trace("Saved received data to $output_path")
            else
                log_trace("Received link for '$dataname' of type $data_type: $data")
            end
        end
    end

    # Keep listening for 120 seconds
    sleep(120)
    NATS.drain(conn)
end


# Run the test
println("Starting large binary payload test...")

# # Run sender first
# println("start smartsend")
# test_large_binary_send()

# Run receiver
println("testing smartreceive")
test_large_binary_receive()

println("Test completed.")
test/test_julia_file_sender.jl (new file, 123 lines)
@@ -0,0 +1,123 @@
#!/usr/bin/env julia
# Test script for large payload testing using binary transport
# Tests sending a large file (> 1MB) via smartsend with binary type
# Updated to match the NATSBridge.jl API

using NATS, JSON, UUIDs, Dates, PrettyPrinting, DataFrames, Arrow, HTTP


# workdir =

# Include the bridge module
include("../src/NATSBridge.jl")
using .NATSBridge

# Configuration
const SUBJECT = "/NATSBridge_test"
const NATS_URL = "nats.yiem.cc"
const FILESERVER_URL = "http://192.168.88.104:8080"

# Create correlation ID for tracing
correlation_id = string(uuid4())


# ------------------------------------------------------------------------------------------------ #
#                                        test file transfer                                        #
# ------------------------------------------------------------------------------------------------ #


# Helper: Log with correlation ID
function log_trace(message)
    timestamp = Dates.now()
    println("[$timestamp] [Correlation: $correlation_id] $message")
end

# File upload handler for the plik server
function plik_upload_handler(fileserver_url::String, dataname::String, data::Vector{UInt8})::Dict{String, Any}
    # Get upload ID and token
    url_getUploadID = "$fileserver_url/upload"
    headers = ["Content-Type" => "application/json"]
    body = """{ "OneShot" : true }"""
    httpResponse = HTTP.request("POST", url_getUploadID, headers, body)
    responseJson = JSON.parse(String(httpResponse.body))
    uploadid = responseJson["id"]
    uploadtoken = responseJson["uploadToken"]

    # Upload the file as multipart form data
    file_multipart = HTTP.Multipart(dataname, IOBuffer(data), "application/octet-stream")
    url_upload = "$fileserver_url/file/$uploadid"
    headers = ["X-UploadToken" => uploadtoken]

    form = HTTP.Form(Dict("file" => file_multipart))
    httpResponse = HTTP.post(url_upload, headers, form)
    responseJson = JSON.parse(String(httpResponse.body))

    fileid = responseJson["id"]
    url = "$fileserver_url/file/$uploadid/$fileid/$dataname"

    return Dict("status" => httpResponse.status, "uploadid" => uploadid, "fileid" => fileid, "url" => url)
end

# Sender: Send large binary files via smartsend
function test_large_binary_send()
    # Read the files as binary data

    # Test data 1
    file_path1 = "./testFile_large.zip"
    file_data1 = read(file_path1)
    filename1 = basename(file_path1)
    data1 = (filename1, file_data1, "binary")

    # Test data 2
    file_path2 = "./testFile_small.zip"
    file_data2 = read(file_path2)
    filename2 = basename(file_path2)
    data2 = (filename2, file_data2, "binary")


    # Use smartsend with binary type - link transport is used automatically
    # when the file size exceeds the threshold (1MB by default)
    # API: smartsend(subject, [(dataname, data, type), ...]; keywords...)
    env, env_json_str = NATSBridge.smartsend(
        SUBJECT,
        [data1, data2]; # List of (dataname, data, type) tuples
        broker_url = NATS_URL,
        fileserver_url = FILESERVER_URL,
        fileserver_upload_handler = plik_upload_handler,
        size_threshold = 1_000_000,
        correlation_id = correlation_id,
        msg_purpose = "chat",
        sender_name = "sender",
        receiver_name = "",
        receiver_id = "",
        reply_to = "",
        reply_to_msg_id = "",
        is_publish = true # Publish the message to NATS
    )

    log_trace("Sent message with transport: $(env.payloads[1].transport)")
    log_trace("Envelope type: $(env.payloads[1].payload_type)")

    # Check if link transport was used
    if env.payloads[1].transport == "link"
        log_trace("Using link transport - file uploaded to HTTP server")
        log_trace("URL: $(env.payloads[1].data)")
    else
        log_trace("Using direct transport - payload sent via NATS")
    end
end

# Run the test
println("Starting large binary payload test...")
println("Correlation ID: $correlation_id")

# Run sender first
println("start smartsend")
test_large_binary_send()

# # Run receiver
# println("testing smartreceive")
# test_large_binary_receive()

println("Test completed.")
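Both upload handlers finish by composing the plik download URL from the upload ID, file ID, and data name. The JavaScript handlers percent-encode the name with encodeURIComponent, while the Julia handlers interpolate it raw; names containing spaces or non-ASCII characters would then yield an invalid URL. A sketch of the encoded form (`plikDownloadUrl` is a hypothetical helper extracted from the handlers above):

```javascript
// Build a plik-style download URL with a percent-encoded file name.
function plikDownloadUrl(fileserverUrl, uploadid, fileid, dataname) {
  return `${fileserverUrl}/file/${uploadid}/${fileid}/${encodeURIComponent(dataname)}`;
}

console.log(plikDownloadUrl("http://192.168.88.104:8080", "u1", "f1", "my report.zip"));
// http://192.168.88.104:8080/file/u1/f1/my%20report.zip
```

Applying the same encoding in the Julia handlers (e.g. via HTTP.jl's URI escaping) would keep the two implementations interoperable for arbitrary file names.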
@@ -1,190 +0,0 @@
|
||||
#!/usr/bin/env julia
|
||||
# Test script for large payload testing using binary transport
|
||||
# Tests sending a large file (> 1MB) via smartsend with binary type
|
||||
|
||||
using NATS, JSON, UUIDs, Dates
|
||||
|
||||
# Include the bridge module
|
||||
include("../src/NATSBridge.jl")
|
||||
using .NATSBridge
|
||||
|
||||
# Configuration
|
||||
const SUBJECT = "/large_binary_test"
|
||||
const NATS_URL = "nats.yiem.cc"
|
||||
const FILESERVER_URL = "http://192.168.88.104:8080"
|
||||
|
||||
# Create correlation ID for tracing
|
||||
correlation_id = string(uuid4())
|
||||
|
||||
|
||||
# ------------------------------------------------------------------------------------------------ #
# test file transfer                                                                               #
# ------------------------------------------------------------------------------------------------ #

# File path for large binary payload test
const FILE_PATH = "./testFile_small.zip"
const filename = basename(FILE_PATH)

# Helper: Log with correlation ID
function log_trace(message)
    timestamp = Dates.now()
    println("[$timestamp] [Correlation: $correlation_id] $message")
end

# Sender: Send large binary file via smartsend
function test_large_binary_send()
    conn = NATS.connect(NATS_URL)
    # Read the large file as binary data
    log_trace("Reading large file: $FILE_PATH")
    file_data = read(FILE_PATH)

    file_size = length(file_data)
    log_trace("File size: $file_size bytes")

    # Use smartsend with binary type - will automatically use link transport
    # if file size exceeds the threshold (1MB by default)
    env = NATSBridge.smartsend(
        SUBJECT,
        file_data,
        "binary";
        nats_url = NATS_URL,
        fileserver_url = FILESERVER_URL,
        dataname = filename
    )

    log_trace("Sent message with transport: $(env.transport)")
    log_trace("Envelope type: $(env.type)")

    # Check if link transport was used
    if env.transport == "link"
        log_trace("Using link transport - file uploaded to HTTP server")
        log_trace("URL: $(env.url)")
    else
        log_trace("Using direct transport - payload sent via NATS")
    end

    NATS.drain(conn)
end

# Receiver: Listen for messages and verify large payload handling
function test_large_binary_receive()
    conn = NATS.connect(NATS_URL)
    NATS.subscribe(conn, SUBJECT) do msg
        log_trace("Received message on $(msg.subject)")

        # Use NATSBridge.smartreceive to handle the data
        result = NATSBridge.smartreceive(msg)
        # Check transport type
        if result.envelope.transport == "direct"
            log_trace("Received direct transport")
        else
            # For link transport, result.data is the URL
            log_trace("Received link transport")
        end

        # Verify the received data matches the original
        if result.envelope.type == "binary"
            if isa(result.data, Vector{UInt8})
                file_size = length(result.data)
                log_trace("Received $(file_size) bytes of binary data")

                # Save received data to a test file
                println("metadata ", result.envelope.metadata)
                dataname = result.envelope.metadata["dataname"]
                if dataname != "NA"
                    output_path = "./new_$dataname"
                    write(output_path, result.data)
                    log_trace("Saved received data to $output_path")
                end

                # Verify file size against the content_length recorded in the envelope metadata
                if file_size == result.envelope.metadata["content_length"]
                    log_trace("SUCCESS: File size matches! Original: $(result.envelope.metadata["content_length"]) bytes")
                else
                    log_trace("WARNING: File size mismatch! Original: $(result.envelope.metadata["content_length"]), Received: $file_size")
                end
            end
        end
    end

    # Keep listening for 2 minutes
    sleep(120)
    NATS.drain(conn)
end


# Run the test
println("Starting large binary payload test...")
println("Correlation ID: $correlation_id")
println("File: $FILE_PATH")

# Run sender first
println("start smartsend")
test_large_binary_send()

# # Run receiver
# println("testing smartreceive")
# test_large_binary_receive()

println("Test completed.")
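The receiver's size comparison accepts any equal-length corruption; a content digest is a stronger round-trip check. A minimal, self-contained sketch using Julia's SHA standard library (applying it to this test's `result.data` and `FILE_PATH` is left to the reader):

```julia
using SHA  # Julia standard library

# Hash-based round-trip check: two equal-length buffers can still differ;
# a SHA-256 digest distinguishes them.
original = UInt8[0x01, 0x02, 0x03]
received = UInt8[0x01, 0x02, 0x04]   # same length, corrupted last byte

println(length(original) == length(received))   # size check passes → true
println(sha256(original) == sha256(received))   # digest check fails → false
```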
test/test_julia_mix_payloads_receiver.jl (new file, 228 lines)
@@ -0,0 +1,228 @@
#!/usr/bin/env julia
# Test script for mixed-content message testing
# Tests receiving a mix of text, json, table, image, audio, video, and binary data
# from Julia serviceA to Julia serviceB using NATSBridge.jl smartreceive
#
# This test demonstrates that any combination and any number of mixed content
# can be sent and received correctly.

using NATS, JSON, UUIDs, Dates, PrettyPrinting, DataFrames, Arrow, HTTP, Base64

# Include the bridge module
include("../src/NATSBridge.jl")
using .NATSBridge

# Configuration
const SUBJECT = "/NATSBridge_mix_test"
const NATS_URL = "nats.yiem.cc"
const FILESERVER_URL = "http://192.168.88.104:8080"


# ------------------------------------------------------------------------------------------------ #
# test mixed content transfer                                                                      #
# ------------------------------------------------------------------------------------------------ #

# Helper: Log with timestamp
function log_trace(message)
    timestamp = Dates.now()
    println("[$timestamp] $message")
end


# Receiver: Listen for messages and verify mixed content handling
function test_mix_receive()
    conn = NATS.connect(NATS_URL)
    NATS.subscribe(conn, SUBJECT) do msg
        log_trace("Received message on $(msg.subject)")

        # Use NATSBridge.smartreceive to handle the data
        # API: smartreceive(msg, download_handler; max_retries, base_delay, max_delay)
        result = NATSBridge.smartreceive(
            msg;
            max_retries = 5,
            base_delay = 100,
            max_delay = 5000
        )

        log_trace("Received $(length(result["payloads"])) payloads")

        # Result is an envelope dictionary whose payloads field is a list of
        # (dataname, data, data_type) tuples
        for (dataname, data, data_type) in result["payloads"]
            log_trace("\n=== Payload: $dataname (type: $data_type) ===")

            # Handle different data types
            if data_type == "text"
                # Text data - should be a String
                if isa(data, String)
                    log_trace("  Type: String")
                    log_trace("  Length: $(length(data)) characters")

                    # Display first 200 characters (first() is safe on multi-byte UTF-8)
                    if length(data) > 200
                        log_trace("  First 200 chars: $(first(data, 200))...")
                    else
                        log_trace("  Content: $data")
                    end

                    # Save to file
                    output_path = "./received_$dataname.txt"
                    write(output_path, data)
                    log_trace("  Saved to: $output_path")
                else
                    log_trace("  ERROR: Expected String, got $(typeof(data))")
                end

            elseif data_type == "dictionary"
                # Dictionary data - should be a JSON object
                if isa(data, JSON.Object{String, Any})
                    log_trace("  Type: Dict")
                    log_trace("  Keys: $(keys(data))")

                    # Display nested content
                    for (key, value) in data
                        log_trace("    $key => $value")
                    end

                    # Save to JSON file
                    output_path = "./received_$dataname.json"
                    json_str = JSON.json(data, 2)
                    write(output_path, json_str)
                    log_trace("  Saved to: $output_path")
                else
                    log_trace("  ERROR: Expected Dict, got $(typeof(data))")
                end

            elseif data_type == "table"
                # Table data - convert to a DataFrame
                data = DataFrame(data)
                if isa(data, DataFrame)
                    log_trace("  Type: DataFrame")
                    log_trace("  Dimensions: $(size(data, 1)) rows x $(size(data, 2)) columns")
                    log_trace("  Columns: $(names(data))")

                    # Display first few rows
                    log_trace("  First 5 rows:")
                    display(data[1:min(5, size(data, 1)), :])

                    # Save to Arrow file
                    output_path = "./received_$dataname.arrow"
                    io = IOBuffer()
                    Arrow.write(io, data)
                    write(output_path, take!(io))
                    log_trace("  Saved to: $output_path")
                else
                    log_trace("  ERROR: Expected DataFrame, got $(typeof(data))")
                end

            elseif data_type in ("image", "audio", "video", "binary")
                # All binary-like types - should be Vector{UInt8}
                if isa(data, Vector{UInt8})
                    log_trace("  Type: Vector{UInt8} (binary)")
                    log_trace("  Size: $(length(data)) bytes")

                    # Save to file
                    output_path = "./received_$dataname.bin"
                    write(output_path, data)
                    log_trace("  Saved to: $output_path")
                else
                    log_trace("  ERROR: Expected Vector{UInt8}, got $(typeof(data))")
                end

            else
                log_trace("  ERROR: Unknown data type '$data_type'")
            end
        end

        # Summary
        println("\n=== Verification Summary ===")
        text_count = count(x -> x[3] == "text", result["payloads"])
        dict_count = count(x -> x[3] == "dictionary", result["payloads"])
        table_count = count(x -> x[3] == "table", result["payloads"])
        image_count = count(x -> x[3] == "image", result["payloads"])
        audio_count = count(x -> x[3] == "audio", result["payloads"])
        video_count = count(x -> x[3] == "video", result["payloads"])
        binary_count = count(x -> x[3] == "binary", result["payloads"])

        log_trace("Text payloads: $text_count")
        log_trace("Dictionary payloads: $dict_count")
        log_trace("Table payloads: $table_count")
        log_trace("Image payloads: $image_count")
        log_trace("Audio payloads: $audio_count")
        log_trace("Video payloads: $video_count")
        log_trace("Binary payloads: $binary_count")

        # Print size info for each payload
        println("\n=== Payload Details ===")
        for (dataname, data, data_type) in result["payloads"]
            if data_type in ("image", "audio", "video", "binary")
                log_trace("$dataname: $(length(data)) bytes (binary)")
            elseif data_type == "table"
                data = DataFrame(data)
                log_trace("$dataname: $(size(data, 1)) rows x $(size(data, 2)) columns (DataFrame)")
            elseif data_type == "dictionary"
                log_trace("$dataname: $(length(JSON.json(data))) bytes (Dict)")
            elseif data_type == "text"
                log_trace("$dataname: $(length(data)) characters (String)")
            end
        end
    end

    # Keep listening for 2 minutes
    sleep(120)
    NATS.drain(conn)
end


# Run the test
println("Starting mixed-content transport test...")
println("Note: This receiver will wait for messages from the sender.")
println("Run test_julia_to_julia_mix_sender.jl first to send test data.")

# Run receiver
println("\ntesting smartreceive for mixed content")
test_mix_receive()

println("\nTest completed.")
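The receiver's per-type branches all reduce to one save-and-log shape; a lookup table keeps that mapping in one place. A sketch assuming the `(dataname, data, data_type)` tuple layout these tests use (`EXT_FOR_TYPE` and `output_name` are hypothetical helpers, not NATSBridge API):

```julia
# Map each payload type to an output file extension; unknown types fall back to .bin.
const EXT_FOR_TYPE = Dict(
    "text" => ".txt", "dictionary" => ".json", "table" => ".arrow",
    "image" => ".bin", "audio" => ".bin", "video" => ".bin", "binary" => ".bin",
)

# Build the output path the same way the receiver does, but table-driven.
output_name(dataname, data_type) =
    "received_" * dataname * get(EXT_FOR_TYPE, data_type, ".bin")

println(output_name("chat_text", "text"))    # → received_chat_text.txt
println(output_name("mystery", "unknown"))   # → received_mystery.bin
```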
test/test_julia_mix_payloads_sender.jl (new file, 239 lines)
@@ -0,0 +1,239 @@
#!/usr/bin/env julia
# Test script for mixed-content message testing
# Tests sending a mix of text, json, table, image, audio, video, and binary data
# from Julia serviceA to Julia serviceB using NATSBridge.jl smartsend
#
# This test demonstrates that any combination and any number of mixed content
# can be sent and received correctly.

using NATS, JSON, UUIDs, Dates, PrettyPrinting, DataFrames, Arrow, HTTP, Base64

# Include the bridge module
include("../src/NATSBridge.jl")
using .NATSBridge

# Configuration
const SUBJECT = "/NATSBridge_mix_test"
const NATS_URL = "nats.yiem.cc"
const FILESERVER_URL = "http://192.168.88.104:8080"

# Create correlation ID for tracing
correlation_id = string(uuid4())


# ------------------------------------------------------------------------------------------------ #
# test mixed content transfer                                                                      #
# ------------------------------------------------------------------------------------------------ #

# Helper: Log with correlation ID
function log_trace(message)
    timestamp = Dates.now()
    println("[$timestamp] [Correlation: $correlation_id] $message")
end


# File upload handler for plik server
function plik_upload_handler(fileserver_url::String, dataname::String, data::Vector{UInt8})::Dict{String, Any}
    # Get upload ID
    url_getUploadID = "$fileserver_url/upload"
    headers = ["Content-Type" => "application/json"]
    body = """{ "OneShot" : true }"""
    httpResponse = HTTP.request("POST", url_getUploadID, headers, body)
    responseJson = JSON.parse(String(httpResponse.body))
    uploadid = responseJson["id"]
    uploadtoken = responseJson["uploadToken"]

    # Upload file
    file_multipart = HTTP.Multipart(dataname, IOBuffer(data), "application/octet-stream")
    url_upload = "$fileserver_url/file/$uploadid"
    headers = ["X-UploadToken" => uploadtoken]

    form = HTTP.Form(Dict("file" => file_multipart))
    httpResponse = HTTP.post(url_upload, headers, form)
    responseJson = JSON.parse(String(httpResponse.body))

    fileid = responseJson["id"]
    url = "$fileserver_url/file/$uploadid/$fileid/$dataname"

    return Dict("status" => httpResponse.status, "uploadid" => uploadid, "fileid" => fileid, "url" => url)
end


# Helper: Create sample data for each type
function create_sample_data()
    # Text data (small - direct transport)
    text_data = "Hello! This is a test chat message. 🎉\nHow are you doing today? 😊"

    # Dictionary/JSON data (medium - could be direct or link)
    dict_data = Dict(
        "type" => "chat",
        "sender" => "serviceA",
        "receiver" => "serviceB",
        "metadata" => Dict(
            "timestamp" => string(Dates.now()),
            "priority" => "high",
            "tags" => ["urgent", "chat", "test"]
        ),
        "content" => Dict(
            "text" => "This is a JSON-formatted chat message with nested structure.",
            "format" => "markdown",
            "mentions" => ["user1", "user2"]
        )
    )

    # Table data (DataFrame - small - direct transport)
    table_data_small = DataFrame(
        id = 1:10,
        message = ["msg_$i" for i in 1:10],
        sender = ["sender_$i" for i in 1:10],
        timestamp = [string(Dates.now()) for _ in 1:10],
        priority = rand(1:3, 10)
    )

    # Table data (DataFrame - large - link transport)
    # ~1.5MB of data (150,000 rows) - should trigger link transport
    table_data_large = DataFrame(
        id = 1:150_000,
        message = ["msg_$i" for i in 1:150_000],
        sender = ["sender_$i" for i in 1:150_000],
        timestamp = [string(Dates.now()) for _ in 1:150_000],
        priority = rand(1:3, 150_000)
    )

    # Image data (small binary - direct transport)
    # 8-byte PNG signature followed by simple RGB data (10*10*3 = 300 bytes of pixel data)
    image_width = 10
    image_height = 10
    image_data = UInt8[]
    # PNG signature (simplified)
    push!(image_data, 0x89, 0x50, 0x4E, 0x47, 0x0D, 0x0A, 0x1A, 0x0A)
    # Simple RGB data (RGBRGBRGB...)
    for i in 1:image_width*image_height
        push!(image_data, 0xFF, 0x00, 0x00) # Red pixel
    end

    # Image data (large - link transport)
    # Create a larger image (~1.5MB) to test link transport
    large_image_width = 500
    large_image_height = 1000
    large_image_data = UInt8[]
    # PNG signature (simplified for 500x1000)
    push!(large_image_data, 0x89, 0x50, 0x4E, 0x47, 0x0D, 0x0A, 0x1A, 0x0A)
    # RGB data (500*1000*3 = 1,500,000 bytes)
    for i in 1:large_image_width*large_image_height
        push!(large_image_data, rand(1:255), rand(1:255), rand(1:255)) # Random color pixels
    end

    # Audio data (small binary - direct transport)
    audio_data = UInt8[rand(1:255) for _ in 1:100]

    # Audio data (large - link transport)
    # ~1.5MB of audio-like data
    large_audio_data = UInt8[rand(1:255) for _ in 1:1_500_000]

    # Video data (small binary - direct transport)
    video_data = UInt8[rand(1:255) for _ in 1:150]

    # Video data (large - link transport)
    # ~1.5MB of video-like data
    large_video_data = UInt8[rand(1:255) for _ in 1:1_500_000]

    # Binary data (small - direct transport)
    binary_data = UInt8[rand(1:255) for _ in 1:200]

    # Binary data (large - link transport)
    # ~1.5MB of binary data
    large_binary_data = UInt8[rand(1:255) for _ in 1:1_500_000]

    return (
        text_data,
        dict_data,
        table_data_small,
        table_data_large,
        image_data,
        large_image_data,
        audio_data,
        large_audio_data,
        video_data,
        large_video_data,
        binary_data,
        large_binary_data
    )
end


# Sender: Send mixed content via smartsend
function test_mix_send()
    # Create sample data
    (text_data, dict_data, table_data_small, table_data_large, image_data, large_image_data, audio_data, large_audio_data, video_data, large_video_data, binary_data, large_binary_data) = create_sample_data()

    # Create payloads list - mixed content with both small and large data
    # Small data uses direct transport, large data uses link transport
    payloads = [
        # Small data (direct transport) - text, dictionary, small table
        ("chat_text", text_data, "text"),
        ("chat_json", dict_data, "dictionary"),
        ("chat_table_small", table_data_small, "table"),

        # Large data (link transport) - large table, large image, large audio, large video, large binary
        ("chat_table_large", table_data_large, "table"),
        ("user_image_large", large_image_data, "image"),
        ("audio_clip_large", large_audio_data, "audio"),
        ("video_clip_large", large_video_data, "video"),
        ("binary_file_large", large_binary_data, "binary")
    ]

    # Use smartsend with mixed content
    env, env_json_str = NATSBridge.smartsend(
        SUBJECT,
        payloads; # List of (dataname, data, type) tuples
        broker_url = NATS_URL,
        fileserver_url = FILESERVER_URL,
        fileserver_upload_handler = plik_upload_handler,
        size_threshold = 1_000_000, # 1MB threshold
        correlation_id = correlation_id,
        msg_purpose = "chat",
        sender_name = "mix_sender",
        receiver_name = "",
        receiver_id = "",
        reply_to = "",
        reply_to_msg_id = "",
        is_publish = true # Publish the message to NATS
    )

    log_trace("Sent message with $(length(env.payloads)) payloads")

    # Log transport type for each payload
    for (i, payload) in enumerate(env.payloads)
        log_trace("Payload $i ('$(payload.dataname)'):")
        log_trace("  Transport: $(payload.transport)")
        log_trace("  Type: $(payload.payload_type)")
        log_trace("  Size: $(payload.size) bytes")
        log_trace("  Encoding: $(payload.encoding)")

        if payload.transport == "link"
            log_trace("  URL: $(payload.data)")
        end
    end

    # Summary
    println("\n--- Transport Summary ---")
    direct_count = count(p -> p.transport == "direct", env.payloads)
    link_count = count(p -> p.transport == "link", env.payloads)
    log_trace("Direct transport: $direct_count payloads")
    log_trace("Link transport: $link_count payloads")
end


# Run the test
println("Starting mixed-content transport test...")
println("Correlation ID: $correlation_id")

# Run sender
println("start smartsend for mixed content")
test_mix_send()

println("\nTest completed.")
println("Note: Run test_julia_to_julia_mix_receiver.jl to receive the messages.")
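The routing that `size_threshold` controls can be illustrated in isolation. This is only a sketch of the 1 MB rule the tests configure, not NATSBridge's actual selection code (`choose_transport` is a hypothetical name):

```julia
# Hypothetical transport selector mirroring the test's 1 MB threshold:
# payloads larger than the threshold go to the fileserver ("link"),
# everything else rides inside the NATS message ("direct").
choose_transport(nbytes::Int; size_threshold::Int = 1_000_000) =
    nbytes > size_threshold ? "link" : "direct"

println(choose_transport(200))          # small binary payload → direct
println(choose_transport(1_500_000))    # ~1.5MB payload → link
```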
test/test_julia_table_receiver.jl (new file, 84 lines)
@@ -0,0 +1,84 @@
#!/usr/bin/env julia
# Test script for DataFrame table transport testing
# Tests receiving 1 large and 1 small DataFrame via direct and link transport
# Uses NATSBridge.jl smartreceive with "table" type

using NATS, JSON, UUIDs, Dates, PrettyPrinting, DataFrames, Arrow, HTTP

# Include the bridge module
include("../src/NATSBridge.jl")
using .NATSBridge

# Configuration
const SUBJECT = "/NATSBridge_table_test"
const NATS_URL = "nats.yiem.cc"
const FILESERVER_URL = "http://192.168.88.104:8080"


# ------------------------------------------------------------------------------------------------ #
# test table transfer                                                                              #
# ------------------------------------------------------------------------------------------------ #

# Helper: Log with timestamp
function log_trace(message)
    timestamp = Dates.now()
    println("[$timestamp] $message")
end


# Receiver: Listen for messages and verify DataFrame table handling
function test_table_receive()
    conn = NATS.connect(NATS_URL)
    NATS.subscribe(conn, SUBJECT) do msg
        log_trace("Received message on $(msg.subject)")

        # Use NATSBridge.smartreceive to handle the data
        # API: smartreceive(msg, download_handler; max_retries, base_delay, max_delay)
        result = NATSBridge.smartreceive(
            msg;
            max_retries = 5,
            base_delay = 100,
            max_delay = 5000
        )

        # Result is an envelope dictionary whose payloads field is a list of
        # (dataname, data, data_type) tuples
        for (dataname, data, data_type) in result["payloads"]
            data = DataFrame(data)
            if isa(data, DataFrame)
                log_trace("Received DataFrame '$dataname' of type $data_type")
                log_trace("  Dimensions: $(size(data, 1)) rows x $(size(data, 2)) columns")
                log_trace("  Column names: $(names(data))")

                # Display first few rows
                println("  First 5 rows:")
                display(data[1:min(5, size(data, 1)), :])

                # Save to file
                output_path = "./received_$dataname.arrow"
                io = IOBuffer()
                Arrow.write(io, data)
                write(output_path, take!(io))
                log_trace("Saved DataFrame to $output_path")
            else
                log_trace("Received unexpected data type for '$dataname': $(typeof(data))")
            end
        end
    end

    # Keep listening for 2 minutes
    sleep(120)
    NATS.drain(conn)
end


# Run the test
println("Starting DataFrame table transport test...")
println("Note: This receiver will wait for messages from the sender.")
println("Run test_julia_to_julia_table_sender.jl first to send test data.")

# Run receiver
println("testing smartreceive")
test_table_receive()

println("Test completed.")
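The Arrow save path used by the receiver can be exercised without a broker at all: round-trip a DataFrame through an in-memory Arrow IPC buffer with the standard `Arrow.write`/`Arrow.Table` calls:

```julia
using DataFrames, Arrow

df = DataFrame(id = 1:3, name = ["a", "b", "c"])

# Serialize to Arrow IPC bytes in memory, then read straight back.
io = IOBuffer()
Arrow.write(io, df)
df2 = DataFrame(Arrow.Table(seekstart(io)))

println(df == df2)   # → true
```

The same buffer could be `write`-n to a `.arrow` file, which is exactly what the receiver does after `take!(io)`.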
test/test_julia_table_sender.jl (new file, 135 lines)
@@ -0,0 +1,135 @@
#!/usr/bin/env julia
# Test script for DataFrame table transport testing
# Tests sending 1 large and 1 small DataFrame via direct and link transport
# Uses NATSBridge.jl smartsend with "table" type

using NATS, JSON, UUIDs, Dates, PrettyPrinting, DataFrames, Arrow, HTTP

# Include the bridge module
include("../src/NATSBridge.jl")
using .NATSBridge

# Configuration
const SUBJECT = "/NATSBridge_table_test"
const NATS_URL = "nats.yiem.cc"
const FILESERVER_URL = "http://192.168.88.104:8080"

# Create correlation ID for tracing
correlation_id = string(uuid4())


# ------------------------------------------------------------------------------------------------ #
# test table transfer                                                                              #
# ------------------------------------------------------------------------------------------------ #

# Helper: Log with correlation ID
function log_trace(message)
    timestamp = Dates.now()
    println("[$timestamp] [Correlation: $correlation_id] $message")
end


# File upload handler for plik server
function plik_upload_handler(fileserver_url::String, dataname::String, data::Vector{UInt8})::Dict{String, Any}
    # Get upload ID
    url_getUploadID = "$fileserver_url/upload"
    headers = ["Content-Type" => "application/json"]
    body = """{ "OneShot" : true }"""
    httpResponse = HTTP.request("POST", url_getUploadID, headers, body)
    responseJson = JSON.parse(String(httpResponse.body))
    uploadid = responseJson["id"]
    uploadtoken = responseJson["uploadToken"]

    # Upload file
    file_multipart = HTTP.Multipart(dataname, IOBuffer(data), "application/octet-stream")
    url_upload = "$fileserver_url/file/$uploadid"
    headers = ["X-UploadToken" => uploadtoken]

    form = HTTP.Form(Dict("file" => file_multipart))
    httpResponse = HTTP.post(url_upload, headers, form)
    responseJson = JSON.parse(String(httpResponse.body))

    fileid = responseJson["id"]
    url = "$fileserver_url/file/$uploadid/$fileid/$dataname"

    return Dict("status" => httpResponse.status, "uploadid" => uploadid, "fileid" => fileid, "url" => url)
end


# Sender: Send DataFrame tables via smartsend
function test_table_send()
    # Create a small DataFrame (will use direct transport)
    small_df = DataFrame(
        id = 1:10,
        name = ["Alice", "Bob", "Charlie", "Diana", "Eve", "Frank", "Grace", "Henry", "Ivy", "Jack"],
        score = [95, 88, 92, 85, 90, 78, 95, 88, 92, 85],
        category = ["A", "B", "A", "B", "A", "B", "A", "B", "A", "B"]
    )

    # Create a large DataFrame (will use link transport if > 1MB)
    # Generate a larger dataset (~2MB to ensure link transport)
    large_ids = 1:50000
    large_names = ["User_$i" for i in 1:50000]
    large_scores = rand(1:100, 50000)
    large_categories = ["Category_$(rand(1:10))" for i in 1:50000]

    large_df = DataFrame(
        id = large_ids,
        name = large_names,
        score = large_scores,
        category = large_categories
    )

    # Test data 1: small DataFrame
    data1 = ("small_table", small_df, "table")

    # Test data 2: large DataFrame
    data2 = ("large_table", large_df, "table")

    # Use smartsend with table type
    # For small DataFrame: will use direct transport (Base64 encoded Arrow IPC)
    # For large DataFrame: will use link transport (uploaded to fileserver)
    env, env_json_str = NATSBridge.smartsend(
        SUBJECT,
        [data1, data2]; # List of (dataname, data, type) tuples
        broker_url = NATS_URL,
        fileserver_url = FILESERVER_URL,
        fileserver_upload_handler = plik_upload_handler,
        size_threshold = 1_000_000, # 1MB threshold
        correlation_id = correlation_id,
        msg_purpose = "chat",
        sender_name = "table_sender",
        receiver_name = "",
        receiver_id = "",
        reply_to = "",
        reply_to_msg_id = "",
        is_publish = true # Publish the message to NATS
    )

    log_trace("Sent message with $(length(env.payloads)) payloads")

    # Log transport type for each payload
    for (i, payload) in enumerate(env.payloads)
        log_trace("Payload $i ('$(payload.dataname)'):")
        log_trace("  Transport: $(payload.transport)")
        log_trace("  Type: $(payload.payload_type)")
        log_trace("  Size: $(payload.size) bytes")
        log_trace("  Encoding: $(payload.encoding)")

        if payload.transport == "link"
            log_trace("  URL: $(payload.data)")
        end
    end
end


# Run the test
println("Starting DataFrame table transport test...")
println("Correlation ID: $correlation_id")

# Run sender
println("start smartsend for tables")
test_table_send()

println("Test completed.")
test/test_julia_text_receiver.jl (new file, 83 lines)
@@ -0,0 +1,83 @@
#!/usr/bin/env julia
# Test script for text transport testing
# Tests receiving 1 large and 1 small text from Julia serviceA to Julia serviceB
# Uses NATSBridge.jl smartreceive with "text" type

using NATS, JSON, UUIDs, Dates, PrettyPrinting, DataFrames, Arrow, HTTP

# Include the bridge module
include("../src/NATSBridge.jl")
using .NATSBridge

# Configuration
const SUBJECT = "/NATSBridge_text_test"
const NATS_URL = "nats.yiem.cc"
const FILESERVER_URL = "http://192.168.88.104:8080"


# ------------------------------------------------------------------------------------------------ #
# test text transfer                                                                               #
# ------------------------------------------------------------------------------------------------ #

# Helper: Log with timestamp
function log_trace(message)
    timestamp = Dates.now()
    println("[$timestamp] $message")
end


# Receiver: Listen for messages and verify text handling
function test_text_receive()
    conn = NATS.connect(NATS_URL)
    NATS.subscribe(conn, SUBJECT) do msg
        log_trace("Received message on $(msg.subject)")

        # Use NATSBridge.smartreceive to handle the data
        # API: smartreceive(msg, download_handler; max_retries, base_delay, max_delay)
        result = NATSBridge.smartreceive(
            msg;
            max_retries = 5,
            base_delay = 100,
            max_delay = 5000
        )

        # Result is an envelope dictionary whose payloads field is a list of
        # (dataname, data, data_type) tuples
        for (dataname, data, data_type) in result["payloads"]
            if isa(data, String)
                log_trace("Received text '$dataname' of type $data_type")
                log_trace("  Length: $(length(data)) characters")

                # Display first 100 characters (first() is safe on multi-byte UTF-8)
                if length(data) > 100
                    log_trace("  First 100 characters: $(first(data, 100))...")
                else
                    log_trace("  Content: $data")
                end

                # Save to file
                output_path = "./received_$dataname.txt"
                write(output_path, data)
                log_trace("Saved text to $output_path")
            else
                log_trace("Received unexpected data type for '$dataname': $(typeof(data))")
            end
        end
    end

    # Keep listening for 2 minutes
    sleep(120)
    NATS.drain(conn)
end


# Run the test
println("Starting text transport test...")
println("Note: This receiver will wait for messages from the sender.")
println("Run test_julia_to_julia_text_sender.jl first to send test data.")

# Run receiver
println("testing smartreceive for text")
test_text_receive()

println("Test completed.")
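The receivers report `length(data)`, which counts characters, not bytes; for non-ASCII text such as the emoji messages these tests send, the byte count differs, and `sizeof` gives the UTF-8 byte length:

```julia
s = "Hello! 🎉"   # 7 ASCII characters plus one 4-byte emoji

println(length(s))   # character count → 8
println(sizeof(s))   # UTF-8 byte count → 11
```

This distinction is also why slicing a String with byte ranges like `data[1:100]` can throw on multi-byte boundaries, whereas `first(data, 100)` always cuts on a character boundary.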
test/test_julia_text_sender.jl (new file, 120 lines)
@@ -0,0 +1,120 @@
#!/usr/bin/env julia
# Test script for text transport testing
# Tests sending 1 large and 1 small text from Julia serviceA to Julia serviceB
# Uses NATSBridge.jl smartsend with "text" type

using NATS, JSON, UUIDs, Dates, PrettyPrinting, DataFrames, Arrow, HTTP

# Include the bridge module
include("../src/NATSBridge.jl")
using .NATSBridge

# Configuration
const SUBJECT = "/NATSBridge_text_test"
const NATS_URL = "nats.yiem.cc"
const FILESERVER_URL = "http://192.168.88.104:8080"

# Create correlation ID for tracing
correlation_id = string(uuid4())


# ------------------------------------------------------------------------------------------------ #
#                                        test text transfer                                         #
# ------------------------------------------------------------------------------------------------ #


# Helper: Log with correlation ID
function log_trace(message)
    timestamp = Dates.now()
    println("[$timestamp] [Correlation: $correlation_id] $message")
end


# File upload handler for plik server
function plik_upload_handler(fileserver_url::String, dataname::String, data::Vector{UInt8})::Dict{String, Any}
    # Get upload ID
    url_getUploadID = "$fileserver_url/upload"
    headers = ["Content-Type" => "application/json"]
    body = """{ "OneShot" : true }"""
    httpResponse = HTTP.request("POST", url_getUploadID, headers, body)
    responseJson = JSON.parse(String(httpResponse.body))
    uploadid = responseJson["id"]
    uploadtoken = responseJson["uploadToken"]

    # Upload file
    file_multipart = HTTP.Multipart(dataname, IOBuffer(data), "application/octet-stream")
    url_upload = "$fileserver_url/file/$uploadid"
    headers = ["X-UploadToken" => uploadtoken]

    form = HTTP.Form(Dict("file" => file_multipart))
    httpResponse = HTTP.post(url_upload, headers, form)
    responseJson = JSON.parse(String(httpResponse.body))

    fileid = responseJson["id"]
    url = "$fileserver_url/file/$uploadid/$fileid/$dataname"

    return Dict("status" => httpResponse.status, "uploadid" => uploadid, "fileid" => fileid, "url" => url)
end


# Sender: Send text via smartsend
function test_text_send()
    # Create a small text (will use direct transport)
    small_text = "Hello, this is a small text message. Testing direct transport via NATS."

    # Create a large text (will use link transport if > 1MB)
    # Generate a larger text (~3.5MB, well above the threshold, to ensure link transport)
    large_text = join(["Line $i: This is a sample text line with some content to pad the size. " for i in 1:50000], "")

    # Test data 1: small text
    data1 = ("small_text", small_text, "text")

    # Test data 2: large text
    data2 = ("large_text", large_text, "text")

    # Use smartsend with text type
    # For small text: will use direct transport (Base64-encoded UTF-8)
    # For large text: will use link transport (uploaded to fileserver)
    env, env_json_str = NATSBridge.smartsend(
        SUBJECT,
        [data1, data2]; # List of (dataname, data, type) tuples
        broker_url = NATS_URL,
        fileserver_url = FILESERVER_URL,
        fileserver_upload_handler = plik_upload_handler,
        size_threshold = 1_000_000, # 1MB threshold
        correlation_id = correlation_id,
        msg_purpose = "chat",
        sender_name = "text_sender",
        receiver_name = "",
        receiver_id = "",
        reply_to = "",
        reply_to_msg_id = "",
        is_publish = true # Publish the message to NATS
    )

    log_trace("Sent message with $(length(env.payloads)) payloads")

    # Log transport type for each payload
    for (i, payload) in enumerate(env.payloads)
        log_trace("Payload $i ('$(payload.dataname)'):")
        log_trace(" Transport: $(payload.transport)")
        log_trace(" Type: $(payload.payload_type)")
        log_trace(" Size: $(payload.size) bytes")
        log_trace(" Encoding: $(payload.encoding)")

        if payload.transport == "link"
            log_trace(" URL: $(payload.data)")
        end
    end
end


# Run the test
println("Starting text transport test...")
println("Correlation ID: $correlation_id")

# Run sender
println("start smartsend for text")
test_text_send()

println("Test completed.")
test/test_micropython_basic.py (new file, 207 lines)
@@ -0,0 +1,207 @@
#!/usr/bin/env python3
"""
Basic functionality test for nats_bridge.py
Tests the core classes and functions without a NATS connection
"""

import sys
import os

# Add src to path for import
sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..', 'src'))

from nats_bridge import (
    MessagePayload,
    MessageEnvelope,
    smartsend,
    smartreceive,
    log_trace,
    generate_uuid,
    get_timestamp,
    _serialize_data,
    _deserialize_data
)
import json


def test_message_payload():
    """Test MessagePayload class"""
    print("\n=== Testing MessagePayload ===")

    # Test direct transport with text
    payload1 = MessagePayload(
        data="Hello World",
        msg_type="text",
        id="test-id-1",
        dataname="message",
        transport="direct",
        encoding="base64",
        size=11
    )

    assert payload1.id == "test-id-1"
    assert payload1.dataname == "message"
    assert payload1.type == "text"
    assert payload1.transport == "direct"
    assert payload1.encoding == "base64"
    assert payload1.size == 11
    print(" [PASS] MessagePayload with text data")

    # Test link transport with URL
    payload2 = MessagePayload(
        data="http://example.com/file.txt",
        msg_type="binary",
        id="test-id-2",
        dataname="file",
        transport="link",
        encoding="none",
        size=1000
    )

    assert payload2.transport == "link"
    assert payload2.data == "http://example.com/file.txt"
    print(" [PASS] MessagePayload with link transport")

    # Test to_dict method
    payload_dict = payload1.to_dict()
    assert "id" in payload_dict
    assert "dataname" in payload_dict
    assert "type" in payload_dict
    assert "transport" in payload_dict
    assert "data" in payload_dict
    print(" [PASS] MessagePayload.to_dict() method")


def test_message_envelope():
    """Test MessageEnvelope class"""
    print("\n=== Testing MessageEnvelope ===")

    # Create payloads
    payload1 = MessagePayload("Hello", "text", id="p1", dataname="msg1")
    payload2 = MessagePayload("http://example.com/file", "binary", id="p2", dataname="file", transport="link")

    # Create envelope
    env = MessageEnvelope(
        send_to="/test/subject",
        payloads=[payload1, payload2],
        correlation_id="test-correlation-id",
        msg_id="test-msg-id",
        msg_purpose="chat",
        sender_name="test_sender",
        receiver_name="test_receiver",
        reply_to="/test/reply"
    )

    assert env.send_to == "/test/subject"
    assert env.correlation_id == "test-correlation-id"
    assert env.msg_id == "test-msg-id"
    assert env.msg_purpose == "chat"
    assert len(env.payloads) == 2
    print(" [PASS] MessageEnvelope creation")

    # Test to_json method
    json_str = env.to_json()
    json_data = json.loads(json_str)
    assert json_data["sendTo"] == "/test/subject"
    assert json_data["correlationId"] == "test-correlation-id"
    assert json_data["msgPurpose"] == "chat"
    assert len(json_data["payloads"]) == 2
    print(" [PASS] MessageEnvelope.to_json() method")


def test_serialize_data():
    """Test _serialize_data function"""
    print("\n=== Testing _serialize_data ===")

    # Test text serialization
    text_bytes = _serialize_data("Hello", "text")
    assert isinstance(text_bytes, bytes)
    assert text_bytes == b"Hello"
    print(" [PASS] Text serialization")

    # Test dictionary serialization
    dict_data = {"key": "value", "number": 42}
    dict_bytes = _serialize_data(dict_data, "dictionary")
    assert isinstance(dict_bytes, bytes)
    parsed = json.loads(dict_bytes.decode('utf-8'))
    assert parsed["key"] == "value"
    print(" [PASS] Dictionary serialization")

    # Test binary serialization
    binary_data = b"\x00\x01\x02"
    binary_bytes = _serialize_data(binary_data, "binary")
    assert binary_bytes == b"\x00\x01\x02"
    print(" [PASS] Binary serialization")

    # Test image serialization
    image_data = bytes([1, 2, 3, 4, 5])
    image_bytes = _serialize_data(image_data, "image")
    assert image_bytes == image_data
    print(" [PASS] Image serialization")


def test_deserialize_data():
    """Test _deserialize_data function"""
    print("\n=== Testing _deserialize_data ===")

    # Test text deserialization
    text_bytes = b"Hello"
    text_data = _deserialize_data(text_bytes, "text", "test-correlation-id")
    assert text_data == "Hello"
    print(" [PASS] Text deserialization")

    # Test dictionary deserialization
    dict_bytes = b'{"key": "value"}'
    dict_data = _deserialize_data(dict_bytes, "dictionary", "test-correlation-id")
    assert dict_data == {"key": "value"}
    print(" [PASS] Dictionary deserialization")

    # Test binary deserialization
    binary_data = b"\x00\x01\x02"
    binary_result = _deserialize_data(binary_data, "binary", "test-correlation-id")
    assert binary_result == b"\x00\x01\x02"
    print(" [PASS] Binary deserialization")


def test_utilities():
    """Test utility functions"""
    print("\n=== Testing Utility Functions ===")

    # Test generate_uuid
    uuid1 = generate_uuid()
    uuid2 = generate_uuid()
    assert uuid1 != uuid2
    print(f" [PASS] generate_uuid() - generated: {uuid1}")

    # Test get_timestamp
    timestamp = get_timestamp()
    assert "T" in timestamp
    print(f" [PASS] get_timestamp() - generated: {timestamp}")


def main():
    """Run all tests"""
    print("=" * 60)
    print("NATSBridge Python/Micropython - Basic Functionality Tests")
    print("=" * 60)

    try:
        test_message_payload()
        test_message_envelope()
        test_serialize_data()
        test_deserialize_data()
        test_utilities()

        print("\n" + "=" * 60)
        print("ALL TESTS PASSED!")
        print("=" * 60)

    except Exception as e:
        print(f"\n[FAIL] Test failed with error: {e}")
        import traceback
        traceback.print_exc()
        sys.exit(1)


if __name__ == "__main__":
    main()
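The senders' comments state that direct-transport payloads are Base64-encoded (UTF-8 for text) before being placed in the JSON envelope. A minimal, self-contained sketch of that round trip follows; the field names mirror the tests, but `encode_direct_payload` and `decode_direct_payload` are illustrative helpers, not the nats_bridge API:

```python
import base64
import json

def encode_direct_payload(dataname: str, data: bytes, data_type: str) -> dict:
    """Pack raw bytes into a JSON-safe payload dict (illustrative field names)."""
    return {
        "dataname": dataname,
        "type": data_type,
        "transport": "direct",
        "encoding": "base64",
        "size": len(data),
        "data": base64.b64encode(data).decode("ascii"),
    }

def decode_direct_payload(payload: dict) -> bytes:
    """Recover the original bytes from a direct-transport payload."""
    assert payload["transport"] == "direct" and payload["encoding"] == "base64"
    return base64.b64decode(payload["data"])

# Round trip: text is serialized as UTF-8 before encoding
original = "Hello, NATS!".encode("utf-8")
envelope_json = json.dumps(encode_direct_payload("greeting", original, "text"))
recovered = decode_direct_payload(json.loads(envelope_json))
print(recovered.decode("utf-8"))
```

Base64 keeps arbitrary bytes valid inside a JSON string at roughly a 4/3 size overhead, which is one reason large payloads are better off on the link transport.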
test/test_micropython_dict_receiver.py (new file, 70 lines)
@@ -0,0 +1,70 @@
#!/usr/bin/env python3
"""
Test script for dictionary transport testing - Receiver
Tests receiving dictionary messages via NATS using nats_bridge.py smartreceive
"""

import sys
import os
import json

# Add src to path for import
sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..', 'src'))

from nats_bridge import smartreceive, log_trace
import nats
import asyncio

# Configuration
SUBJECT = "/NATSBridge_dict_test"
NATS_URL = "nats://nats.yiem.cc:4222"


async def main():
    log_trace("", "Starting dictionary transport receiver test...")
    log_trace("", "Note: This receiver will wait for messages from the sender.")
    log_trace("", "Run test_micropython_dict_sender.py first to send test data.")

    # Connect to NATS
    nc = await nats.connect(NATS_URL)
    log_trace("", f"Connected to NATS at {NATS_URL}")

    # Subscribe to the subject
    async def message_handler(msg):
        log_trace("", f"Received message on {msg.subject}")

        # Use smartreceive to handle the data
        result = smartreceive(msg.data)
        cid = result.get("correlationId", "")

        # Result is an envelope dictionary whose "payloads" field is a list of (dataname, data, data_type) tuples
        for dataname, data, data_type in result["payloads"]:
            if isinstance(data, dict):
                log_trace(cid, f"Received dictionary '{dataname}' of type {data_type}")
                log_trace(cid, f" Keys: {list(data.keys())}")

                # Display full content for small dicts, a summary otherwise
                if len(data) <= 10:
                    log_trace(cid, f" Content: {json.dumps(data, indent=2)}")
                else:
                    log_trace(cid, f" Summary: {json.dumps(data, default=str)[:200]}...")

                # Save to file
                output_path = f"./received_{dataname}.json"
                with open(output_path, 'w') as f:
                    json.dump(data, f, indent=2)
                log_trace(cid, f"Saved dictionary to {output_path}")
            else:
                log_trace(cid, f"Received unexpected data type for '{dataname}': {type(data)}")

    sid = await nc.subscribe(SUBJECT, cb=message_handler)
    log_trace("", f"Subscribed to {SUBJECT} with subscription ID: {sid}")

    # Keep listening for 120 seconds
    await asyncio.sleep(120)
    await nc.close()
    log_trace("", "Test completed.")


if __name__ == "__main__":
    asyncio.run(main())
test/test_micropython_dict_sender.py (new file, 100 lines)
@@ -0,0 +1,100 @@
#!/usr/bin/env python3
"""
Test script for dictionary transport testing - Micropython
Tests sending dictionary messages via NATS using nats_bridge.py smartsend
"""

import sys
import os

# Add src to path for import
sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..', 'src'))

from nats_bridge import smartsend, log_trace
import uuid

# Configuration
SUBJECT = "/NATSBridge_dict_test"
NATS_URL = "nats://nats.yiem.cc:4222"
FILESERVER_URL = "http://192.168.88.104:8080"
SIZE_THRESHOLD = 1_000_000  # 1MB

# Create correlation ID for tracing
correlation_id = str(uuid.uuid4())


def main():
    # Create a small dictionary (will use direct transport)
    small_dict = {
        "name": "test",
        "value": 42,
        "enabled": True,
        "metadata": {
            "version": "1.0.0",
            "timestamp": "2026-02-22T12:00:00Z"
        }
    }

    # Create a large dictionary (will use link transport if > 1MB)
    # Generate a larger dictionary (~2MB to ensure link transport)
    large_dict = {
        "id": str(uuid.uuid4()),
        "items": [
            {
                "index": i,
                "name": f"item_{i}",
                "value": i * 1.5,
                "data": "x" * 10000  # Large string per item
            }
            for i in range(200)
        ],
        "metadata": {
            "count": 200,
            "created": "2026-02-22T12:00:00Z"
        }
    }

    # Test data 1: small dictionary
    data1 = ("small_dict", small_dict, "dictionary")

    # Test data 2: large dictionary
    data2 = ("large_dict", large_dict, "dictionary")

    log_trace(correlation_id, f"Starting smartsend for subject: {SUBJECT}")
    log_trace(correlation_id, f"Correlation ID: {correlation_id}")

    # Use smartsend with dictionary type
    env, env_json_str = smartsend(
        SUBJECT,
        [data1, data2],  # List of (dataname, data, type) tuples
        nats_url=NATS_URL,
        fileserver_url=FILESERVER_URL,
        size_threshold=SIZE_THRESHOLD,
        correlation_id=correlation_id,
        msg_purpose="chat",
        sender_name="dict_sender",
        receiver_name="",
        receiver_id="",
        reply_to="",
        reply_to_msg_id="",
        is_publish=True  # Publish the message to NATS
    )

    log_trace(correlation_id, f"Sent message with {len(env.payloads)} payloads")

    # Log transport type for each payload
    for i, payload in enumerate(env.payloads):
        log_trace(correlation_id, f"Payload {i+1} ('{payload.dataname}'):")
        log_trace(correlation_id, f" Transport: {payload.transport}")
        log_trace(correlation_id, f" Type: {payload.type}")
        log_trace(correlation_id, f" Size: {payload.size} bytes")
        log_trace(correlation_id, f" Encoding: {payload.encoding}")

        if payload.transport == "link":
            log_trace(correlation_id, f" URL: {payload.data}")

    print(f"Test completed. Correlation ID: {correlation_id}")


if __name__ == "__main__":
    main()
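The senders above all pass a `size_threshold` of 1MB and let smartsend pick direct versus link transport from the serialized payload size. The decision rule implied by those comments can be sketched as follows; `choose_transport` is a hypothetical helper, not a function from nats_bridge:

```python
SIZE_THRESHOLD = 1_000_000  # 1MB, matching the test configuration

def choose_transport(serialized: bytes, threshold: int = SIZE_THRESHOLD) -> str:
    """Return 'direct' for small payloads, 'link' for payloads above the threshold."""
    return "direct" if len(serialized) <= threshold else "link"

small = b"x" * 100          # like small_dict after JSON serialization
large = b"x" * 2_000_000    # like large_dict (~2MB)
print(choose_transport(small), choose_transport(large))
```

Note the decision is made per payload, not per envelope, which is why one message can mix direct and link payloads as in the mixed tests below.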
test/test_micropython_file_receiver.py (new file, 65 lines)
@@ -0,0 +1,65 @@
#!/usr/bin/env python3
"""
Test script for file transport testing - Receiver
Tests receiving binary files via NATS using nats_bridge.py smartreceive
"""

import sys
import os

# Add src to path for import
sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..', 'src'))

from nats_bridge import smartreceive, log_trace
import nats
import asyncio

# Configuration
SUBJECT = "/NATSBridge_file_test"
NATS_URL = "nats://nats.yiem.cc:4222"


async def main():
    log_trace("", "Starting file transport receiver test...")
    log_trace("", "Note: This receiver will wait for messages from the sender.")
    log_trace("", "Run test_micropython_file_sender.py first to send test data.")

    # Connect to NATS
    nc = await nats.connect(NATS_URL)
    log_trace("", f"Connected to NATS at {NATS_URL}")

    # Subscribe to the subject
    async def message_handler(msg):
        log_trace("", f"Received message on {msg.subject}")

        # Use smartreceive to handle the data
        result = smartreceive(msg.data)
        cid = result.get("correlationId", "")

        # Result is an envelope dictionary whose "payloads" field is a list of (dataname, data, data_type) tuples
        for dataname, data, data_type in result["payloads"]:
            if isinstance(data, bytes):
                log_trace(cid, f"Received binary '{dataname}' of type {data_type}")
                log_trace(cid, f" Size: {len(data)} bytes")

                # Display first 100 bytes as hex
                log_trace(cid, f" First 100 bytes (hex): {data[:100].hex()}")

                # Save to file
                output_path = f"./received_{dataname}.bin"
                with open(output_path, 'wb') as f:
                    f.write(data)
                log_trace(cid, f"Saved binary to {output_path}")
            else:
                log_trace(cid, f"Received unexpected data type for '{dataname}': {type(data)}")

    sid = await nc.subscribe(SUBJECT, cb=message_handler)
    log_trace("", f"Subscribed to {SUBJECT} with subscription ID: {sid}")

    # Keep listening for 120 seconds
    await asyncio.sleep(120)
    await nc.close()
    log_trace("", "Test completed.")


if __name__ == "__main__":
    asyncio.run(main())
test/test_micropython_file_sender.py (new file, 80 lines)
@@ -0,0 +1,80 @@
#!/usr/bin/env python3
"""
Test script for file transport testing - Micropython
Tests sending binary files via NATS using nats_bridge.py smartsend
"""

import sys
import os

# Add src to path for import
sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..', 'src'))

from nats_bridge import smartsend, log_trace
import uuid

# Configuration
SUBJECT = "/NATSBridge_file_test"
NATS_URL = "nats://nats.yiem.cc:4222"
FILESERVER_URL = "http://192.168.88.104:8080"
SIZE_THRESHOLD = 1_000_000  # 1MB

# Create correlation ID for tracing
correlation_id = str(uuid.uuid4())


def main():
    # Create small binary data (will use direct transport)
    small_binary = b"This is small binary data for testing direct transport."
    small_binary += b"\x00" * 100  # Add some null bytes

    # Create large binary data (will use link transport if > 1MB)
    # Generate a larger binary (~2MB to ensure link transport)
    large_binary = bytes([
        (i * 7) % 256 for i in range(2_000_000)
    ])

    # Test data 1: small binary (direct transport)
    data1 = ("small_binary", small_binary, "binary")

    # Test data 2: large binary (link transport)
    data2 = ("large_binary", large_binary, "binary")

    log_trace(correlation_id, f"Starting smartsend for subject: {SUBJECT}")
    log_trace(correlation_id, f"Correlation ID: {correlation_id}")

    # Use smartsend with binary type
    env, env_json_str = smartsend(
        SUBJECT,
        [data1, data2],  # List of (dataname, data, type) tuples
        nats_url=NATS_URL,
        fileserver_url=FILESERVER_URL,
        size_threshold=SIZE_THRESHOLD,
        correlation_id=correlation_id,
        msg_purpose="chat",
        sender_name="file_sender",
        receiver_name="",
        receiver_id="",
        reply_to="",
        reply_to_msg_id="",
        is_publish=True  # Publish the message to NATS
    )

    log_trace(correlation_id, f"Sent message with {len(env.payloads)} payloads")

    # Log transport type for each payload
    for i, payload in enumerate(env.payloads):
        log_trace(correlation_id, f"Payload {i+1} ('{payload.dataname}'):")
        log_trace(correlation_id, f" Transport: {payload.transport}")
        log_trace(correlation_id, f" Type: {payload.type}")
        log_trace(correlation_id, f" Size: {payload.size} bytes")
        log_trace(correlation_id, f" Encoding: {payload.encoding}")

        if payload.transport == "link":
            log_trace(correlation_id, f" URL: {payload.data}")

    print(f"Test completed. Correlation ID: {correlation_id}")


if __name__ == "__main__":
    main()
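Because the large binary in the sender above is generated deterministically, the receiving side can verify end-to-end integrity by regenerating the pattern and comparing hashes rather than shipping the expected bytes around. The tests themselves do not do this; the sha256 check below is a suggested addition:

```python
import hashlib

def pattern(n: int) -> bytes:
    # Same deterministic pattern as test_micropython_file_sender.py
    return bytes((i * 7) % 256 for i in range(n))

sent = pattern(2_000_000)
# Stand-in for bytes read back from received_large_binary.bin on the receiver
received = pattern(2_000_000)

assert hashlib.sha256(received).hexdigest() == hashlib.sha256(sent).hexdigest()
print("integrity ok:", len(received), "bytes")
```

A hash comparison also catches truncation or corruption on the link transport path (upload, fileserver, download), not just NATS delivery.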
test/test_micropython_mixed_receiver.py (new file, 97 lines)
@@ -0,0 +1,97 @@
#!/usr/bin/env python3
"""
Test script for mixed payload testing - Receiver
Tests receiving mixed payload types via NATS using nats_bridge.py smartreceive
"""

import sys
import os
import json

# Add src to path for import
sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..', 'src'))

from nats_bridge import smartreceive, log_trace
import nats
import asyncio

# Configuration
SUBJECT = "/NATSBridge_mixed_test"
NATS_URL = "nats://nats.yiem.cc:4222"


async def main():
    log_trace("", "Starting mixed payload receiver test...")
    log_trace("", "Note: This receiver will wait for messages from the sender.")
    log_trace("", "Run test_micropython_mixed_sender.py first to send test data.")

    # Connect to NATS
    nc = await nats.connect(NATS_URL)
    log_trace("", f"Connected to NATS at {NATS_URL}")

    # Subscribe to the subject
    async def message_handler(msg):
        log_trace("", f"Received message on {msg.subject}")

        # Use smartreceive to handle the data
        result = smartreceive(msg.data)
        cid = result.get("correlationId", "")

        log_trace(cid, f"Received envelope with {len(result['payloads'])} payloads")

        # Result is an envelope dictionary whose "payloads" field is a list of (dataname, data, data_type) tuples
        for dataname, data, data_type in result["payloads"]:
            log_trace(cid, f"\n--- Payload: {dataname} (type: {data_type}) ---")

            if isinstance(data, str):
                log_trace(cid, " Type: text/string")
                log_trace(cid, f" Length: {len(data)} characters")
                if len(data) <= 100:
                    log_trace(cid, f" Content: {data}")
                else:
                    log_trace(cid, f" First 100 chars: {data[:100]}...")
                # Save to file
                output_path = f"./received_{dataname}.txt"
                with open(output_path, 'w') as f:
                    f.write(data)
                log_trace(cid, f" Saved to: {output_path}")

            elif isinstance(data, dict):
                log_trace(cid, " Type: dictionary")
                log_trace(cid, f" Keys: {list(data.keys())}")
                log_trace(cid, f" Content: {json.dumps(data, indent=2)}")
                # Save to file
                output_path = f"./received_{dataname}.json"
                with open(output_path, 'w') as f:
                    json.dump(data, f, indent=2)
                log_trace(cid, f" Saved to: {output_path}")

            elif isinstance(data, bytes):
                log_trace(cid, " Type: binary")
                log_trace(cid, f" Size: {len(data)} bytes")
                log_trace(cid, f" First 100 bytes (hex): {data[:100].hex()}")
                # Save to file
                output_path = f"./received_{dataname}.bin"
                with open(output_path, 'wb') as f:
                    f.write(data)
                log_trace(cid, f" Saved to: {output_path}")
            else:
                log_trace(cid, f" Received unexpected data type: {type(data)}")

        # Log envelope metadata
        log_trace(cid, "\n--- Envelope Metadata ---")
        log_trace(cid, f" Correlation ID: {result.get('correlationId', 'N/A')}")
        log_trace(cid, f" Message ID: {result.get('msgId', 'N/A')}")
        log_trace(cid, f" Sender: {result.get('senderName', 'N/A')}")
        log_trace(cid, f" Purpose: {result.get('msgPurpose', 'N/A')}")

    sid = await nc.subscribe(SUBJECT, cb=message_handler)
    log_trace("", f"Subscribed to {SUBJECT} with subscription ID: {sid}")

    # Keep listening for 120 seconds
    await asyncio.sleep(120)
    await nc.close()
    log_trace("", "Test completed.")


if __name__ == "__main__":
    asyncio.run(main())
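The mixed receiver above repeats the same save-to-disk logic in each type branch. One way to factor that out is a small dispatcher keyed on the payload's Python type; this is a refactoring suggestion, not code from the repo:

```python
import json

def save_payload(dataname: str, data) -> str:
    """Write a payload to disk with an extension chosen from its Python type."""
    if isinstance(data, str):
        path = f"./received_{dataname}.txt"
        with open(path, "w") as f:
            f.write(data)
    elif isinstance(data, dict):
        path = f"./received_{dataname}.json"
        with open(path, "w") as f:
            json.dump(data, f, indent=2)
    elif isinstance(data, (bytes, bytearray)):
        path = f"./received_{dataname}.bin"
        with open(path, "wb") as f:
            f.write(bytes(data))
    else:
        raise TypeError(f"Unsupported payload type: {type(data)}")
    return path

print(save_payload("demo_dict", {"status": "ok"}))
```

The message handler then shrinks to logging plus one `save_payload(dataname, data)` call per tuple, and new payload types need changes in only one place.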
test/test_micropython_mixed_sender.py (new file, 94 lines)
@@ -0,0 +1,94 @@
#!/usr/bin/env python3
"""
Test script for mixed payload testing - Micropython
Tests sending mixed payload types via NATS using nats_bridge.py smartsend
"""

import sys
import os

# Add src to path for import
sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..', 'src'))

from nats_bridge import smartsend, log_trace
import uuid

# Configuration
SUBJECT = "/NATSBridge_mixed_test"
NATS_URL = "nats://nats.yiem.cc:4222"
FILESERVER_URL = "http://192.168.88.104:8080"
SIZE_THRESHOLD = 1_000_000  # 1MB

# Create correlation ID for tracing
correlation_id = str(uuid.uuid4())


def main():
    # Create payloads for mixed content test

    # 1. Small text (direct transport)
    text_data = "Hello, this is a text message for testing mixed payloads!"

    # 2. Small dictionary (direct transport)
    dict_data = {
        "status": "ok",
        "code": 200,
        "message": "Test successful",
        "items": [1, 2, 3]
    }

    # 3. Small binary (direct transport)
    binary_data = b"\x00\x01\x02\x03\x04\x05" + b"\xff" * 100

    # 4. Large text (link transport - will use fileserver)
    # ~1.3MB, above the 1MB threshold (range(100) would only yield ~330KB and stay direct)
    large_text = "\n".join([
        f"Line {i}: This is a large text payload for link transport testing. " * 50
        for i in range(400)
    ])

    # Test data list - mixed payload types
    data = [
        ("message_text", text_data, "text"),
        ("config_dict", dict_data, "dictionary"),
        ("small_binary", binary_data, "binary"),
        ("large_text", large_text, "text"),
    ]

    log_trace(correlation_id, f"Starting smartsend for subject: {SUBJECT}")
    log_trace(correlation_id, f"Correlation ID: {correlation_id}")

    # Use smartsend with mixed types
    env, env_json_str = smartsend(
        SUBJECT,
        data,  # List of (dataname, data, type) tuples
        nats_url=NATS_URL,
        fileserver_url=FILESERVER_URL,
        size_threshold=SIZE_THRESHOLD,
        correlation_id=correlation_id,
        msg_purpose="chat",
        sender_name="mixed_sender",
        receiver_name="",
        receiver_id="",
        reply_to="",
        reply_to_msg_id="",
        is_publish=True  # Publish the message to NATS
    )

    log_trace(correlation_id, f"Sent message with {len(env.payloads)} payloads")

    # Log transport type for each payload
    for i, payload in enumerate(env.payloads):
        log_trace(correlation_id, f"Payload {i+1} ('{payload.dataname}'):")
        log_trace(correlation_id, f" Transport: {payload.transport}")
        log_trace(correlation_id, f" Type: {payload.type}")
        log_trace(correlation_id, f" Size: {payload.size} bytes")
        log_trace(correlation_id, f" Encoding: {payload.encoding}")

        if payload.transport == "link":
            log_trace(correlation_id, f" URL: {payload.data}")

    print(f"Test completed. Correlation ID: {correlation_id}")


if __name__ == "__main__":
    main()
test/test_micropython_text_receiver.py (new file, 69 lines)
@@ -0,0 +1,69 @@
#!/usr/bin/env python3
"""
Test script for text transport testing - Receiver
Tests receiving text messages via NATS using nats_bridge.py smartreceive
"""

import sys
import os
import json

# Add src to path for import
sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..', 'src'))

from nats_bridge import smartreceive, log_trace
import nats
import asyncio

# Configuration
SUBJECT = "/NATSBridge_text_test"
NATS_URL = "nats://nats.yiem.cc:4222"


async def main():
    log_trace("", "Starting text transport receiver test...")
    log_trace("", "Note: This receiver will wait for messages from the sender.")
    log_trace("", "Run test_micropython_text_sender.py first to send test data.")

    # Connect to NATS
    nc = await nats.connect(NATS_URL)
    log_trace("", f"Connected to NATS at {NATS_URL}")

    # Subscribe to the subject
    async def message_handler(msg):
        log_trace("", f"Received message on {msg.subject}")

        # Use smartreceive to handle the data
        result = smartreceive(msg.data)

        # Result is an envelope dictionary whose "payloads" field holds a list
        # of (dataname, data, data_type) tuples
        for dataname, data, data_type in result["payloads"]:
            if isinstance(data, str):
                log_trace(result.get("correlationId", ""), f"Received text '{dataname}' of type {data_type}")
                log_trace(result.get("correlationId", ""), f"  Length: {len(data)} characters")

                # Display first 100 characters
                if len(data) > 100:
                    log_trace(result.get("correlationId", ""), f"  First 100 characters: {data[:100]}...")
                else:
                    log_trace(result.get("correlationId", ""), f"  Content: {data}")

                # Save to file
                output_path = f"./received_{dataname}.txt"
                with open(output_path, 'w') as f:
                    f.write(data)
                log_trace(result.get("correlationId", ""), f"Saved text to {output_path}")
            else:
                log_trace(result.get("correlationId", ""), f"Received unexpected data type for '{dataname}': {type(data)}")

    sid = await nc.subscribe(SUBJECT, cb=message_handler)
    log_trace("", f"Subscribed to {SUBJECT} with subscription ID: {sid}")

    # Keep listening for 120 seconds
    await asyncio.sleep(120)
    await nc.close()
    log_trace("", "Test completed.")


if __name__ == "__main__":
    asyncio.run(main())
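The handler above assumes `smartreceive` returns an envelope dictionary with a `correlationId` string and a `payloads` list of `(dataname, data, data_type)` tuples. A minimal sketch of that assumed shape and the unpacking loop, with a fabricated envelope purely for illustration (the `summarize` helper is not part of `nats_bridge`):

```python
# Hypothetical envelope mirroring the fields the receiver's handler reads.
envelope = {
    "correlationId": "123e4567-e89b-12d3-a456-426614174000",
    "payloads": [
        ("small_text", "Hello, this is a small text message.", "text"),
        ("large_text", "http://fileserver.example/blob/abc", "text"),
    ],
}

def summarize(env):
    """Unpack (dataname, data, data_type) tuples the same way the handler does."""
    cid = env.get("correlationId", "")
    lines = []
    for dataname, data, data_type in env["payloads"]:
        if isinstance(data, str):
            lines.append(f"{cid[:8]} {dataname} ({data_type}): {len(data)} chars")
        else:
            lines.append(f"{cid[:8]} {dataname}: unexpected {type(data).__name__}")
    return lines

for line in summarize(envelope):
    print(line)
```

Note that for a link-transport payload the `data` field is still a string (the download URL), so the `isinstance(data, str)` branch in the real handler would save the URL itself, not the fetched content.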
82  test/test_micropython_text_sender.py  Normal file
@@ -0,0 +1,82 @@
#!/usr/bin/env python3
"""
Test script for text transport testing - Sender
Tests sending text messages via NATS using nats_bridge.py smartsend
"""

import sys
import os

# Add src to path for import
sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..', 'src'))

from nats_bridge import smartsend, log_trace
import uuid

# Configuration
SUBJECT = "/NATSBridge_text_test"
NATS_URL = "nats://nats.yiem.cc:4222"
FILESERVER_URL = "http://192.168.88.104:8080"
SIZE_THRESHOLD = 1_000_000  # 1MB

# Create correlation ID for tracing
correlation_id = str(uuid.uuid4())


def main():
    # Create a small text (will use direct transport)
    small_text = "Hello, this is a small text message. Testing direct transport via NATS."

    # Create a large text (~3.5MB, well over the 1MB threshold, to ensure link transport)
    large_text = "\n".join([
        f"Line {i}: This is a sample text line with some content to pad the size. " * 100
        for i in range(500)
    ])

    # Test data 1: small text
    data1 = ("small_text", small_text, "text")

    # Test data 2: large text
    data2 = ("large_text", large_text, "text")

    log_trace(correlation_id, f"Starting smartsend for subject: {SUBJECT}")
    log_trace(correlation_id, f"Correlation ID: {correlation_id}")

    # Use smartsend with text type:
    #   small text -> direct transport (Base64-encoded UTF-8)
    #   large text -> link transport (uploaded to fileserver)
    env, env_json_str = smartsend(
        SUBJECT,
        [data1, data2],  # List of (dataname, data, type) tuples
        nats_url=NATS_URL,
        fileserver_url=FILESERVER_URL,
        size_threshold=SIZE_THRESHOLD,
        correlation_id=correlation_id,
        msg_purpose="chat",
        sender_name="text_sender",
        receiver_name="",
        receiver_id="",
        reply_to="",
        reply_to_msg_id="",
        is_publish=True  # Publish the message to NATS
    )

    log_trace(correlation_id, f"Sent message with {len(env.payloads)} payloads")

    # Log transport type for each payload
    for i, payload in enumerate(env.payloads):
        log_trace(correlation_id, f"Payload {i+1} ('{payload.dataname}'):")
        log_trace(correlation_id, f"  Transport: {payload.transport}")
        log_trace(correlation_id, f"  Type: {payload.type}")
        log_trace(correlation_id, f"  Size: {payload.size} bytes")
        log_trace(correlation_id, f"  Encoding: {payload.encoding}")

        if payload.transport == "link":
            log_trace(correlation_id, f"  URL: {payload.data}")

    print(f"Test completed. Correlation ID: {correlation_id}")


if __name__ == "__main__":
    main()
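The sender relies on the generated text comfortably exceeding `SIZE_THRESHOLD` so that `smartsend` picks link transport. A quick standalone check of both payload sizes against the threshold; the `choose_transport` helper is illustrative of the assumed size rule, not a function in `nats_bridge`:

```python
SIZE_THRESHOLD = 1_000_000  # 1MB, as in the sender script

def choose_transport(data: str, threshold: int = SIZE_THRESHOLD) -> str:
    """Illustrative rule: payloads over the threshold go via fileserver link."""
    return "link" if len(data.encode("utf-8")) > threshold else "direct"

# Same payloads as the sender builds
small_text = "Hello, this is a small text message. Testing direct transport via NATS."
large_text = "\n".join([
    f"Line {i}: This is a sample text line with some content to pad the size. " * 100
    for i in range(500)
])

print(choose_transport(small_text))  # direct
print(choose_transport(large_text))  # link
```

Each generated line is roughly 70 characters repeated 100 times across 500 lines, about 3.5MB total, so the large payload clears the 1MB threshold by a wide margin.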