UltraNet: Revisiting ultrasound

Since I was a teenager, I've always thought ultrasound was very cool, and although I studied it in school, I never found anything I could use it for in the tech I've built. Recently I decided to revisit that idea.

Imagine a world where all the intelligent devices around you could seamlessly link together, sharing skills and distributing workloads through ultrasonic waves inaudible to humans. A decentralized cloud of ambient intelligence, woven into the fabric of our surroundings yet fluidly reconfigurable. This is the idea of an "UltraNet".


Historical Context and Challenges

In the early days of personal computing, connecting disparate devices into cohesive networks was painful. Companies closely guarded proprietary standards, forcing complex reverse engineering or exclusive partnership deals. Integrating new peripherals like printers and scanners required low-level software development. The process of binding devices into cooperative workflows was cumbersome and fragile, usually involving duct tape. Upgrading a single component could break the entire tower of interdependencies, tape on tape on tape. Delivering a reliable, seamless experience across multiple third-party products was a really big struggle for developers given an ever-shifting technological landscape.

A little bit sci-fi and a little bit fun:

The Core Technology of UltraNet

The UltraNet represents a philosophical departure from those paradigms.

The UltraNet leverages miniaturized ultrasound transducers embedded across our electronics like phones, wearables, home assistants, and IoT gadgets. At its core, it provides a universal natural language for computational components to cooperatively interface.

World's smallest ultrasound detector is tinier than a blood cell


These transducers act as wireless modems, transmitting and receiving encoded data on reserved ultrasonic frequencies using techniques like Orthogonal Frequency-Division Multiplexing (OFDM) and Multiple Input Multiple Output (MIMO).[1][2][3]
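To make the physical layer a bit more concrete, here's a minimal sketch (Python with NumPy) of how a short payload could be BPSK-mapped onto OFDM subcarriers placed just above the audible band. The 96 kHz sample rate, 2048-point FFT, 20-24 kHz band, and cyclic-prefix length are all assumptions for illustration; real transducers, MIMO, and channel coding are left out entirely.

```python
import numpy as np

FS = 96_000              # assumed sample rate (Hz); needs hardware able to emit/capture ultrasound
N_FFT = 2048             # FFT size -> subcarrier spacing of FS / N_FFT ~ 46.9 Hz
CP = 256                 # cyclic-prefix length in samples
BAND = (20_000, 24_000)  # illustrative ultrasonic band just above human hearing

# FFT bin indices whose frequencies fall inside the chosen band
bins = np.arange(int(BAND[0] / FS * N_FFT), int(BAND[1] / FS * N_FFT))

def bits_to_ofdm(bits: np.ndarray) -> np.ndarray:
    """BPSK-map bits onto ultrasonic subcarriers and return real time-domain samples."""
    out = []
    for i in range(0, len(bits), len(bins)):
        chunk = bits[i:i + len(bins)]
        symbols = np.zeros(len(bins))
        symbols[:len(chunk)] = 2.0 * chunk - 1.0   # bits 0/1 -> BPSK symbols -1/+1
        spectrum = np.zeros(N_FFT, dtype=complex)
        spectrum[bins] = symbols                   # positive-frequency subcarriers
        spectrum[N_FFT - bins] = symbols           # Hermitian mirror -> real waveform
        t = np.fft.ifft(spectrum).real
        out.append(np.concatenate([t[-CP:], t]))   # prepend cyclic prefix
    return np.concatenate(out)

# Example: modulate the bits of a tiny trigger message
payload_bits = np.unpackbits(np.frombuffer(b"PING", dtype=np.uint8))
waveform = bits_to_ofdm(payload_bits)
```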

An ultrasonic protocol handles device discovery, pairing, and collision avoidance, allowing ad-hoc mesh networks to be woven between proximate nodes without configuration headaches. This is all enabled by natural machine language. The UltraNet's "ultrasonic virtual machine" is a middleware scripting engine that allows intelligent software components to interface directly using efficient symbolic communication languages. Intelligent agents embodying advanced AI models like ChatGPT can now "beam" instructions and data between one another using these VMs.

Think of this as devices using ultrasound to "speak" instructions or even code (that could become applications locally).

  • Researchers at the University of Hawaii developed MEMS ultrasound transducers as small as 300 micrometers in diameter, intended for applications like wireless endoscopic imaging.
  • Scientists at the University of Pennsylvania created even tinier "chip-based micro-unmanned nanosonic air vehicles" with diameters under 100 micrometers that can emit ultrasound pulses for communication and navigation purposes.

Piezoelectric nanoparticles/nanomaterials:

  • Lead zirconate titanate (PZT) nanoparticles around 20-100 nanometers in size have been demonstrated to generate and detect ultrasound waves when electrically stimulated.
  • Researchers have experimented with arrays of vertically aligned nanostructures like zinc oxide nanowires for ultrasonic transduction at the nanoscale.

A model trained on the whole of Wikipedia (a quick sizing check follows the list):

  • Monolingual Model:
    • Parameters: 141.4 million
    • Storage Size: 141.4 million * 4 bytes = 565.6 MB
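As a sanity check on that arithmetic, and to see how far quantization could shrink the footprint, here's a tiny sketch. The 141.4 M parameter count comes from the list above; the int8/int4 lines are just the usual back-of-the-envelope assumption of one byte or half a byte per parameter.

```python
PARAMS = 141.4e6  # parameter count of the monolingual model above

def size_mb(params: float, bytes_per_param: float) -> float:
    """Storage footprint in megabytes for a given numeric precision."""
    return params * bytes_per_param / 1e6

print(f"fp32: {size_mb(PARAMS, 4):.1f} MB")    # ~565.6 MB, matching the figure above
print(f"int8: {size_mb(PARAMS, 1):.1f} MB")    # ~141.4 MB
print(f"int4: {size_mb(PARAMS, 0.5):.1f} MB")  # ~70.7 MB
```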

Distributed AI Collaboration

In an UltraNet future, your smartphone could run a bunch of background tasks and apps that communicate via ultrasound with other devices around you. The most interesting possibility is that the UltraNet could enable swarms of cooperating intelligent agents to interface naturally. Imagine intelligent apps run by advanced AI language models like GPT, each working separately yet collaborating towards common goals. Your phone's "butler" agent could use ultrasound-based commands to delegate sub-tasks to specialist agents running on other devices - your laptop's "text analysis" agent, your smart speaker's "question answering" agent, and so on. Each device's AI assistant could split itself into multiple cooperating agents, communicating via ultrasound to distribute its cognitive workload across diverse devices for enhanced capabilities. Devices would be able to "chat" with each other in a way we as humans cannot hear, and because devices are themselves contextually aware (running a GPT or some type of language model) they can communicate naturally. In my experimentation, I've found that the devices need to be less than 8 inches from each other to repeatably trigger sequences of GPTs with my limited Dawei equipment.
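Here's a rough sketch of what that delegation might look like in code. Everything here - the agent names, the capability strings, the imagined ultrasonic transport - is hypothetical; it only illustrates a butler agent routing a sub-task to whichever nearby specialist advertises the right capability.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """A nearby device's agent, reachable over the (hypothetical) ultrasonic link."""
    device: str
    capabilities: set[str] = field(default_factory=set)

    def handle(self, task: str, payload: str) -> str:
        # Placeholder: in practice this would prompt the local model on that device.
        return f"[{self.device}] handled '{task}' for: {payload[:40]}"

class ButlerAgent:
    """Splits a goal into sub-tasks and delegates each one to a capable peer."""
    def __init__(self, peers: list[Agent]):
        self.peers = peers

    def delegate(self, task: str, payload: str) -> str:
        for peer in self.peers:
            if task in peer.capabilities:
                # A real system would transmit the request ultrasonically here.
                return peer.handle(task, payload)
        return "no nearby agent advertises that capability"

peers = [
    Agent("laptop", {"text_analysis", "design"}),
    Agent("smart_speaker", {"question_answering"}),
]
butler = ButlerAgent(peers)
print(butler.delegate("question_answering", "What's on my calendar tomorrow?"))
```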

Seamless Integration and Task Delegation

Intelligent camera apps running on your phone or AR glasses could use ultrasound to directly interface with the agents running on smart home devices, allowing seamless control through voice or gestures. You could direct cleaning robots, adjust smart appliances, or summon a cloud-based "research agent" to gather contextual information - all through the ambient UltraNet of ultrasound communication between interconnected AIs. Need a chart or diagram generated for an upcoming presentation? The agents could work together - your voice query seamlessly handed off to a text understanding agent, a visual intelligence agent generating the graphic based on contextual data from internet research agents, with the final product rendered by your laptop's high-powered "design agent." With ultrasonic mesh networking, intelligent devices and AI assistants all around us could cooperate as a decentralized hive mind, symbiotically blending their capabilities through continuous multi-agent communication and task delegation via the UltraNet. The nightmare scenario of this is a shitload of Rabbit R1 devices doing discrete tasks over ultrasound when everything could be done on a single device.

A New Paradigm of Interoperability

Where past paradigms trapped developers in vicious cycles of constant driver wrangling and interface redesigns, the UltraNet achieves seamless, future-proof interoperability between both existing and unanticipated devices. By abstracting away the physical transports into a standardized proxy language for AI components, the UltraNet enables intelligent capabilities to be dynamically recruited and recombined as needed. This is the "fluid assembly" of AI components enabled by the UltraNet - when you talk into a room, although you can't hear them, there is an ambient cloud of intelligence comprised of multiple ChatGPT instances, ever reconfiguring and optimizing itself by sharing skills and workloads directly through ultrasonic communication.

Nice pipe dream!
Why Not Just Use Bluetooth or Wi-Fi?

Yah yah, it's true, Bluetooth and Wi-Fi are indeed more established technologies for wireless communication. However, the UltraNet offers several unique advantages that make it a compelling alternative for specific use cases:

Reduced Interference

Ultrasound waves do not penetrate walls as easily as radio waves, which significantly reduces interference between devices in different rooms or areas. This makes the UltraNet particularly useful in environments with a high density of devices in extremely close proximity.

Low Power Consumption

Ultrasound transducers can be extremely low power compared to traditional radio-based communication methods. This is particularly beneficial for battery-operated devices like wearables and IoT gadgets that may only need to send a signal to trigger a GPT on another device in a specific manner.

Enhanced Security

The limited range and directional nature of ultrasound make it inherently more secure against eavesdropping and unauthorized access. This adds an extra layer of security for sensitive data exchanges, kinda...maybe.

Specialized Communication

The UltraNet's ultrasonic virtual machine and optimized proxy languages allow for highly efficient, symbolic communication between AI models. This could be orders of magnitude faster than converting data to and from human-audible audio or text.

Seamless Integration

The UltraNet abstracts away the complexities of device drivers and APIs, allowing for seamless, future-proof interoperability between both existing and new devices. This reduces the development overhead and makes it easier to integrate technologies into a cohesive system regardless of what they are. While Bluetooth and Wi-Fi are excellent for general-purpose communication, the UltraNet offers specialized advantages that make it suited for creating a decentralized, cooperative network of devices triggering GPTs.

As mentioned, recent developments in language model compression and quantization, such as the Wiki-40B multilingual model fitting on a smartphone, suggest that the UltraNet vision of passing triggers between devices, and devices understanding them, might be possible. This is not merely the internet of things, but the intelligence of things: a network of GPTs with the LLMs working tirelessly to make your life easier, more efficient, and fUNNN!!! :P

I have an MVP of this running, but it would need a lot of work to make real, if it's even reasonable in terms of hardware implementation with devices at scale (it could be very hard on battery if not done thoughtfully). [4]

UltraNet Specialized Communication Language


The UltraNet requires a specialized communication language to facilitate efficient and seamless interaction between devices using ultrasonic waves. This language must be optimized for the unique characteristics of ultrasonic communication, including low power consumption, high data density, and reduced interference. Below is an outline of how this new language and its protocols would be developed and implemented; however, I suspect it will be a serious amount of work.

Speed Advantages

Efficient Language Encoding

If GPTs and agents develop a highly efficient natural language, it could significantly reduce the amount of data that needs to be transmitted. This language could use optimized encoding, slang, and short sentences to convey complex information succinctly. Machines can process and understand these optimized languages much faster than humans, potentially speeding up communication.
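As a toy illustration of how much a shared symbolic vocabulary could compress a request, here's a sketch that maps intents and arguments to single-byte opcodes before transmission. The opcode table and message format are made up for illustration; the point is only the size difference versus sending a verbose, human-readable request.

```python
import json

# Hypothetical shared vocabulary both agents already know (part of the "language")
OPCODES = {"summarize": 0x01, "translate": 0x02, "fetch_weather": 0x03}
ARGS = {"today": 0x10, "tomorrow": 0x11, "en->fr": 0x20}

def encode(intent: str, arg: str) -> bytes:
    """Pack an intent plus argument into two bytes instead of a full sentence."""
    return bytes([OPCODES[intent], ARGS[arg]])

verbose = json.dumps({"intent": "fetch_weather", "argument": "tomorrow"}).encode()
compact = encode("fetch_weather", "tomorrow")

print(len(verbose), "bytes as JSON")     # ~50 bytes
print(len(compact), "bytes as opcodes")  # 2 bytes
```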

Processing Speed

Machines can process information at speeds far beyond human capabilities. This means that once the data is received, it can be decoded and acted upon almost instantaneously. The bottleneck in human communication—our cognitive processing speed—does not apply to machines.

Parallel Processing

Machines can handle multiple streams of data simultaneously through parallel processing. This could allow for multiple conversations or data exchanges to occur concurrently, further increasing the overall communication speed.

Ultrasound vs. RF Communication

Bandwidth and Data Transfer Rate

RF communication typically offers higher bandwidth and data transfer rates compared to ultrasound. However, if the triggering language used by GPTs is highly efficient, the amount of data needing transmission could be minimized, potentially offsetting the lower data transfer rate of ultrasound. The context of the information would be stored on each device, much like we store information in our brains and use our voices to trigger it when communicating with each other.
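A quick back-of-the-envelope comparison makes the trade-off concrete. The 1 kbit/s ultrasonic rate and the message sizes below are assumptions chosen for illustration, not measured figures.

```python
ULTRASOUND_BPS = 1_000  # assumed usable ultrasonic data rate (bits/s)

def airtime_ms(payload_bytes: int, bps: int = ULTRASOUND_BPS) -> float:
    """Time on air for a payload at the given bit rate, in milliseconds."""
    return payload_bytes * 8 / bps * 1000

print(f"2-byte opcode trigger: {airtime_ms(2):.0f} ms")    # ~16 ms
print(f"200-byte JSON request: {airtime_ms(200):.0f} ms")  # ~1600 ms
```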

Latency

Over short distances, the acoustic propagation delay is negligible (sound crosses 20 cm in well under a millisecond), so for close-proximity devices a direct ultrasonic link could offer low real-time latency, whereas an RF path might pick up additional latency from network congestion and signal processing delays.
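For reference, here is the propagation-delay arithmetic behind that claim, using the standard ~343 m/s speed of sound in air; the 20 cm separation roughly matches the close-range distance mentioned earlier.

```python
SPEED_OF_SOUND = 343.0  # m/s in air at room temperature

def propagation_delay_ms(distance_m: float) -> float:
    """One-way acoustic propagation delay in milliseconds."""
    return distance_m / SPEED_OF_SOUND * 1000

print(f"{propagation_delay_ms(0.20):.2f} ms over 20 cm")  # ~0.58 ms
```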

Interference and Security

Ultrasound communication is less prone to interference from other wireless signals, which can be a significant advantage in environments with high RF noise. Additionally, the security of ultrasound communication can be higher due to its lower susceptibility to interception and jamming.

In theory, using ultrasound for communication between GPTs and agents, combined with a highly efficient natural language, could indeed be faster than traditional RF communication under certain conditions. The key factors include the efficiency of the language, the processing capabilities of the machines, and the specific application environment. While ultrasound has limitations in terms of range and environmental sensitivity, its advantages in terms of low interference, and potential for low-latency communication when coupled correctly, make it a compelling option for this specific case. As technology continues to advance, exploring and optimizing these methods could aid in machine-to-machine communication.

WHY? Fun! And it's a good idea to trigger GPTs with clicks and beeps, and it would be great not to hear them.

Appendix

Components of the system:


Miniaturized MEMS Transducers

  • Technology: The use of miniaturized MEMS (Micro-Electro-Mechanical Systems) transducers allows devices to generate and receive ultrasonic waves in the 20-200 kHz range.
  • Integration: These transducers can be embedded in a wide range of devices, including smartphones, tablets, wearables, home assistants, and IoT gadgets.
  • Efficiency: Paired with low-power DSP (Digital Signal Processing) hardware, these transducers can efficiently process ultrasound signals, making the communication system both effective and energy-efficient.

Ultrasonic Protocols

  • Physical Layer Protocol: This defines how data is physically transmitted and received using ultrasonic frequencies, ensuring reliable communication through modulation schemes like OFDM (Orthogonal Frequency-Division Multiplexing) and MIMO (Multiple Input Multiple Output).
  • MAC Protocol: Manages device discovery, pairing, and collision avoidance, allowing multiple devices to communicate without interference, even in dense environments.
  • Network Layer Protocol: Handles routing of data packets between devices in an ad-hoc mesh network, supporting dynamic topology changes as devices move in and out of range.
  • Transport Layer Protocol: Ensures reliable data transfer with error detection and correction mechanisms, managing data flow control to prevent congestion.
  • Application Layer Protocol: Defines high-level commands and data structures used by intelligent agents to communicate, including APIs for common tasks like device capability discovery, task delegation, and data sharing (a minimal frame-layout sketch follows this list).
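To ground the stack a little, here is a minimal sketch of what a single frame crossing these layers might look like. The field names, sizes, and the CRC32 check are assumptions for illustration; a real protocol would need far more care around addressing, acknowledgements, and error correction.

```python
import struct
import zlib

def build_frame(src: int, dst: int, msg_type: int, payload: bytes) -> bytes:
    """Pack a hypothetical UltraNet frame: 2-byte addresses, type, length, payload, CRC32."""
    header = struct.pack(">HHBB", src, dst, msg_type, len(payload))
    body = header + payload
    return body + struct.pack(">I", zlib.crc32(body))

def parse_frame(frame: bytes) -> tuple[int, int, int, bytes]:
    """Validate the CRC and unpack the header fields and payload."""
    body, crc = frame[:-4], struct.unpack(">I", frame[-4:])[0]
    if zlib.crc32(body) != crc:
        raise ValueError("corrupted frame")
    src, dst, msg_type, length = struct.unpack(">HHBB", body[:6])
    return src, dst, msg_type, body[6:6 + length]

frame = build_frame(src=0x0001, dst=0x0002, msg_type=0x03, payload=b"\x01\x11")
print(parse_frame(frame))
```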

Ultrasonic Virtual Machine (UVM)

Abstracted Computational Model

  • Standardization: The UVM provides an abstracted model for AI agents, defining the semantics and "instruction set" of the ultrasonic proxy language.
  • Encoding and Decoding: Translates high-level symbolic instructions into ultrasonic signals and vice versa, optimized for efficient symbolic communication between AI models.

AI Middleware Services

Device Capability Discovery

  • Advertisement: Devices can discover and advertise their capabilities using ultrasonic signals, facilitating capability-based routing and task delegation (a hypothetical advertisement format is sketched below).
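As with the other snippets, this is only a hypothetical shape for such an advertisement: a device periodically broadcasts its ID and a compact list of capability codes, and listeners build a table of who can do what. The capability codes reuse the opcode idea from the language sketch above and are not part of any real spec.

```python
# Hypothetical capability codes a device can advertise
CAPABILITIES = {"text_analysis": 0x01, "question_answering": 0x02, "design": 0x03}

def build_advertisement(device_id: int, caps: list[str]) -> bytes:
    """Device ID, capability count, then one byte per advertised capability."""
    return bytes([device_id, len(caps)] + [CAPABILITIES[c] for c in caps])

def parse_advertisement(msg: bytes) -> tuple[int, list[int]]:
    device_id, count = msg[0], msg[1]
    return device_id, list(msg[2:2 + count])

# A listener keeps a routing table of capability code -> devices that advertise it
routing: dict[int, set[int]] = {}
device_id, caps = parse_advertisement(build_advertisement(0x42, ["design"]))
for cap in caps:
    routing.setdefault(cap, set()).add(device_id)
print(routing)  # {3: {66}}
```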

AI Subcomponent Management

  • Lifecycle Management: Manages the lifecycle of AI subcomponents, including instantiation, scheduling, and migration, ensuring efficient use of computational resources across the ultrasonic mesh.

Data Marshaling

  • Consistency: Handles the conversion and transfer of data between different AI agents, ensuring data consistency and integrity across the network.

Multi-Agent Execution Engine

Dynamic Agent Assemblies

  • Collaboration: Constructs multi-agent ensembles based on query and goal requirements, allowing for load balancing and workload distribution across the ultrasonic transceiver pool.

Agent Mobility

  • Optimization: Supports the migration of agents between devices to optimize performance, ensuring seamless execution of tasks even as devices move in and out of range.

Transaction Processing

  • Consistency: Manages transactions and ensures consistency in distributed cognition, implementing error handling and recovery mechanisms.

AI Resource Management

Resource Sharing

  • Efficiency: Manages memory, compute, and knowledge-base resources across devices, implementing power and thermal management to optimize performance.

Predictive Prefetching

  • Proactivity: Predicts and prefetches coarse and fine-grained skills based on context, ensuring that necessary resources are available when needed.

Replication and Caching

  • Availability: Replicates and caches AI models and data across the distributed mesh, ensuring high availability and quick access to frequently used resources.

AI Security Services

Capabilities-Based Security Model

  • Access Control: Governs permissions and access control for AI agents, ensuring that only authorized agents can access sensitive data and perform critical tasks.

Authentication and Integrity

  • Protection: Implements mechanisms to authenticate devices and ensure data integrity, protecting against unauthorized access and tampering.

Ethical Safeguards

  • Responsible Use: Includes ethical constraints to ensure responsible use of AI, preventing undesired behaviors and ensuring compliance with ethical guidelines.

By developing a specialized language and protocols tailored for ultrasonic communication, the UltraNet can achieve a new level of seamless, cooperative interaction between intelligent devices, paving the way for a future of ubiquitous and symbiotic artificial intelligence.

with love! :P