
Paper deep dive

LiveSense: A Real-Time Wi-Fi Sensing Platform for Range-Doppler on COTS Laptop

Jessica Sanson, Rahul C. Shah, Maximilian Pinaroc, Cagri Tanriover, Valerio Frascolla

Year: 2026 · Venue: arXiv preprint · Area: eess.SP · Type: Preprint · Embeddings: 15

Abstract

We present LiveSense, a cross-platform system that transforms a commercial off-the-shelf (COTS) Wi-Fi Network Interface Card (NIC) on a laptop into a centimeter-level Range-Doppler sensor while preserving simultaneous communication capability. The laptops are equipped with COTS Intel AX211 (Wi-Fi 6E) or Intel BE201 (Wi-Fi 7) NICs. LiveSense can (i) extract fully synchronized channel state information (CSI) at ≥ 40 Hz, (ii) perform time-phase alignment and self-interference cancellation on-device, and (iii) provide a real-time stream of range, Doppler, subcarrier magnitude/phase, and annotated video frames to a Python/Qt Graphical User Interface (GUI). The demo will showcase the ability to detect (i) distance and radial velocity of attendees within a few meters of the device, (ii) micro-motion (respiration), and (iii) hand-gesture ranging. To the best of our knowledge, this is the first-ever demo to obtain accurate range information of targets from commercial Wi-Fi, despite the limited 160 MHz bandwidth.

Tags

ai-safety (imported, 100%) · eesssp (suggested, 92%) · preprint (suggested, 88%)

Links

PDF not stored locally. Use the link above to view on the source site.

Intelligence

Status: succeeded | Model: google/gemini-3.1-flash-lite-preview | Prompt: intel-v1 | Confidence: 94%

Last extracted: 3/13/2026, 12:20:01 AM

Summary

LiveSense is a real-time, calibration-free Wi-Fi sensing platform that transforms commercial off-the-shelf (COTS) laptops into centimeter-level Range-Doppler sensors. By leveraging 160 MHz bandwidth and monostatic operation, it enables human presence detection, gesture recognition, and vital sign monitoring (e.g., respiration) while maintaining simultaneous Wi-Fi communication.

Entities (5)

LiveSense · platform · 100%
Intel AX211 · hardware · 95%
Intel BE201 · hardware · 95%
6G-SENSES · project · 90%
MultiX · project · 90%

Relation Signals (3)

LiveSense performs Range-Doppler Sensing

confidence 100% · transforms a commercial off-the-shelf (COTS) Wi-Fi Network Interface Card (NIC) on a laptop into a centimeter-level Range-Doppler sensor

LiveSense utilizes Intel AX211

confidence 95% · The laptops are equipped with COTS Intel AX211 (Wi-Fi 6E) or Intel BE201 (Wi-Fi 7) NICs.

LiveSense supported by 6G-SENSES

confidence 90% · This work was partially supported by the European Union’s Horizon Europe SNS JU projects 6G-SENSES

Cypher Suggestions (2)

Find all hardware utilized by the LiveSense platform · confidence 90% · unvalidated

MATCH (p:Platform {name: 'LiveSense'})-[:UTILIZES]->(h:Hardware) RETURN h.name

Identify projects supporting the LiveSense platform · confidence 90% · unvalidated

MATCH (p:Platform {name: 'LiveSense'})-[:SUPPORTED_BY]->(proj:Project) RETURN proj.name
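Both suggestions are marked unvalidated and hard-code the platform name. As a minimal sketch (not part of the page's tooling), the queries can be lifted into parameterized Cypher, which lets a driver cache the query plan and reuse the same text for any platform node; the labels and relationship types are taken verbatim from the suggestions above, while the endpoint and credentials in the comment are hypothetical:

```python
# Parameterized forms of the two suggested Cypher queries. Hard-coded node
# names are lifted into $-parameters.
# Hypothetical execution (requires the `neo4j` package and a live instance):
#   from neo4j import GraphDatabase
#   with GraphDatabase.driver("bolt://localhost:7687", auth=(user, pw)) as drv:
#       records, _, _ = drv.execute_query(*hardware_for_platform("LiveSense"))

def hardware_for_platform(name: str) -> tuple[str, dict]:
    """Query for hardware utilized by a platform node."""
    query = (
        "MATCH (p:Platform {name: $name})-[:UTILIZES]->(h:Hardware) "
        "RETURN h.name AS hardware"
    )
    return query, {"name": name}

def projects_supporting_platform(name: str) -> tuple[str, dict]:
    """Query for projects that support a platform node."""
    query = (
        "MATCH (p:Platform {name: $name})-[:SUPPORTED_BY]->(proj:Project) "
        "RETURN proj.name AS project"
    )
    return query, {"name": name}
```

The `$name` parameter also avoids string interpolation of user input into Cypher text, which is the usual injection risk with the literal form shown above.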

Full Text

14,691 characters extracted from source content.


LiveSense: A Real-Time Wi-Fi Sensing Platform for Range–Doppler on COTS Laptop

Jessica Sanson∗, Rahul C. Shah†, Maximilian Pinaroc†, Cagri Tanriover† and Valerio Frascolla∗
∗Intel Deutschland GmbH, Munich, Germany; †Intel Labs, Santa Clara, CA, USA
Email: jessica.sanson, valerio.frascolla, rahul.c.shah, maximilian.c.pinaroc, cagri.tanriover@intel.com

Abstract—We present LIVESENSE, a cross-platform system that transforms a commercial off-the-shelf (COTS) Wi-Fi Network Interface Card (NIC) on a laptop into a centimetre-level Range–Doppler sensor while preserving simultaneous communication capability. The laptops are equipped with COTS Intel AX211 (Wi-Fi 6E) or Intel BE201 (Wi-Fi 7) NICs. LIVESENSE can (i) extract fully synchronised channel state information (CSI) at ≥ 40 Hz, (ii) perform time–phase alignment and self-interference cancellation on-device, and (iii) provide a real-time stream of range, Doppler, sub-carrier magnitude/phase and annotated video frames to a Python/Qt Graphical User Interface (GUI). The demo will showcase the ability to detect (i) distance and radial velocity of attendees within a few metres of the device, (ii) micro-motion (respiration), and (iii) hand-gesture ranging. To the best of our knowledge, this is the first-ever demo to obtain accurate range information of targets from commercial Wi-Fi, despite the limited 160 MHz bandwidth.

I. INTRODUCTION

While Wi-Fi was originally designed for high-speed data transmission, it also provides an unprecedented opportunity for device-free sensing through Joint/Integrated Sensing and Communication (JSAC/ISAC) [1]. Several EU-funded projects are tackling the challenging task of providing solutions to outstanding ISAC open points, such as the MultiX [2] and the 6G-SENSES [3] projects, and part of this work comes out of activities performed under those projects. Active Wi-Fi sensing leverages commercial NICs with dual antennas to realize radar-like capabilities.
By transmitting and receiving on separate antennas, not only Doppler estimation for motion detection but also, crucially, range estimation can be enabled. This unlocks a set of new applications, e.g., presence detection, hand-gesture recognition [4], device-free activity monitoring in smart environments and extended reality (XR) [5]. However, until recently, extracting precise range information from commercial off-the-shelf (COTS) Wi-Fi devices was not possible: prior approaches required external Software Defined Radios (SDRs), or could only obtain Doppler velocity. The breakthrough work [6] demonstrated for the first time a practical method for accurate range and Doppler extraction from COTS Wi-Fi NICs. LIVESENSE is the first end-to-end, real-time platform to offer such unique features. It processes CSI in real time, providing centimetre-level range and Doppler estimation, supporting human presence sensing, range and velocity detection of moving objects as well as breathing detection. It also allows for sub-decimetre hand-gesture ranging, despite the limited 160 MHz Wi-Fi bandwidth available on low-cost, COTS laptops.

A. Monostatic Sensing and Real-World Generalization

While earlier Wi-Fi sensing systems predominantly adopted bistatic or multistatic operation, requiring environment-specific calibration, recent works have begun to explore monostatic sensing over Wi-Fi to overcome this limitation. For instance, ISAC-Fi [7] demonstrated a prototype Wi-Fi device with self-interference cancellation enabling monostatic sensing under standard communication workloads. Similarly, a recent SDR-based implementation [8] showed stable, long-duration human motion sensing (e.g., breathing) up to 10 m under non-line-of-sight conditions using a single device.
Building on these advances, our proposed system further pushes the envelope by operating on unmodified commercial laptops, leveraging 160 MHz bandwidth for high Signal-to-Noise Ratio (SNR), and achieving centimeter-level range + Doppler estimation; all without requiring per-device or per-environment calibration. Table I highlights the advantages of LiveSense over traditional bistatic COTS sensing [9]. Unlike bistatic systems that require separated Transmit/Receive (Tx/Rx) pairs and extensive environmental calibration, LIVESENSE operates monostatically on a single laptop. By leveraging the full 160 MHz bandwidth (512 subcarriers), our system achieves a processing gain of > 25 dB, significantly enhancing SNR against interference compared to narrowband solutions. Crucially, LIVESENSE ensures seamless coexistence: it dedicates specific spatial streams for sensing while maintaining standard Wi-Fi connectivity on others. We validated this under heavy loads (e.g., simultaneous High-Definition (HD) video streaming and video calls), maintaining a stable sensing rate of ≥ 40 Hz without packet loss, overcoming the resource scheduling challenges typically found in real-time edge sensing [10].

TABLE I: Comparison of LIVESENSE vs. Standard Bistatic Wi-Fi Sensing
Feature | Standard Bistatic Sensing | LiveSense (Proposed)
Topology | 2+ Devices (Tx & Rx separated) | Monostatic (Single COTS Laptop)
Sensing Metric | Doppler (Velocity) dominant | Range + Doppler (cm-level)
Interference | Susceptible to env. noise | >25 dB SNR Gain (via 160 MHz)
Calibration | Environment-specific training | Calibration-free (Auto-alignment)
Coexistence | Often drops packets/rates | >40 Hz stable under Video/Call load
Tracking | Single target focus | Multi-target (Range domain separation)
Deployability | Controlled setup required | In-the-wild deployment (Cafes, Offices)

arXiv:2603.06545v1 [eess.SP] 6 Mar 2026

Fig. 1: LIVESENSE graphical user interface

The real-time phase alignment of the system and static clutter removal eliminate the need for manual calibration, allowing robust operation in 'in-the-wild' scenarios. We successfully tested our platform on > 10 distinct device Stock Keeping Units (SKUs) across diverse regulatory domains (EU, USA, Taiwan, etc.) and in uncontrolled environments like busy coffee shops and offices, demonstrating effective multi-target tracking where bistatic approaches often fail.

II. PROCESSING PIPELINE

Real commercial Wi-Fi hardware (COTS NICs) is not designed for full-duplex radar-like operation: hardware asynchronization (e.g., unsynchronized time/phase clocks) and strong Tx–Rx coupling, such as self-interference (SI), pose serious challenges. To address these, we adapt the signal-processing techniques recently proposed for monostatic Wi-Fi sensing [6]. Our implementation pipeline performs:
• CSI Cross-Correlation for coarse frame alignment (delay correction), followed by sub-sample interpolation for fine delay estimation.
• Per-Frame Phase Unwrapping and Correction, using the dominant Tx–Rx coupling (leakage) as a reference — this cancels sampling-frequency offset (SFO) and phase drift across frames.
• Adaptive Self-Interference Cancellation (SIC), via a sliding-window background subtraction (or SI template subtraction) to suppress static clutter (static environment reflections including Tx/Rx coupling) and dynamically adapt to environmental changes. This allows operation without manual recalibration even under moderate environmental variation.

A. Real-Time Sensing Algorithm Pipeline

Building on this synchronized, cleaned CSI stream, the rest of the pipeline proceeds as follows:
1) 2-D FFT / DFT: We apply a Discrete Fourier Transform (DFT) over the N subcarriers (range domain) and a Fast Fourier Transform (FFT) over the M temporal frames (Doppler domain).

Fig. 2: LIVESENSE real-time Range–Doppler heatmap
2) Detection and Post-Processing: We estimate SNR and apply a Constant False Alarm Rate (CFAR) detector to identify true reflections from moving targets (people, gestures, micro-motions such as breathing).
3) Real-Time Output / Streaming: The processed range, Doppler and motion estimates are streamed in real time, enabling interactive applications (presence detection, gesture, vital-sign monitoring, etc.).

Our pipeline achieves end-to-end latency of < 1 s and maintains a stable packet sampling rate of ≥ 40 Hz under typical communication traffic loads. To overcome the sparse packet rates typical of standard Wi-Fi traffic, the system utilizes active packet injection (transmitting dummy/control packets) to sustain a stable sensing sampling rate, ensuring good performance even when the communication load is low.

Figure 1 shows the Graphical User Interface (GUI) of LIVESENSE, where users can configure all sensing and processing parameters and monitor real-time outputs during operation. The platform provides interactive configuration of acquisition and signal-processing parameters, and displays real-time outputs including range–Doppler maps (RDMs), subcarrier statistics, and presence detection. All synchronization, range estimation, and pre-processing steps operate on streaming data, while Doppler processing is triggered once M frames are accumulated. For performance stability, we use a buffer size of 4× M by default, accommodating different application needs and hardware capabilities. The platform supports three operational modes:
• Gesture mode: Short-range mode that prioritizes high range accuracy via advanced interpolation.
• Presence mode: Maximizes detection range for occupancy applications.
• Efficiency mode: Reduces computational overhead for capacity-limited hardware.
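The 2-D transform and CFAR detection steps above can be sketched numerically. The following is a minimal NumPy illustration, not the LIVESENSE implementation: it uses the demo parameters quoted in the paper (160 MHz over 512 subcarriers in the 6 GHz band, 25 ms frame interval, 32-frame Doppler batches), a synthetic single-target CSI model of our own devising, and a batch-mean subtraction as a crude stand-in for the platform's sliding-window background subtraction:

```python
import numpy as np

rng = np.random.default_rng(0)

# Demo-like parameters from the paper.
N, M = 512, 32                  # subcarriers (range axis), frames (Doppler axis)
B, fc, T, c = 160e6, 6e9, 25e-3, 3e8
df = B / N                      # subcarrier spacing
wavelength = c / fc

# Synthetic CSI batch (illustrative assumption, not the NIC's CSI format):
# one target at 1 m moving at 0.3 m/s, strong static clutter, and noise.
n = np.arange(N)                # subcarrier index
m = np.arange(M)[:, None]       # frame index
r_true, v_true = 1.0, 0.3
csi = (np.exp(-2j * np.pi * n * df * (2 * r_true / c))          # range phase
       * np.exp(2j * np.pi * (2 * v_true / wavelength) * m * T))  # Doppler phase
csi += 3.0 * np.exp(-2j * np.pi * n * df * (2 * 0.2 / c))       # static clutter
csi += 0.05 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))

# Static-clutter suppression: subtract the per-subcarrier batch mean
# (stand-in for the sliding-window background subtraction / SIC step).
csi -= csi.mean(axis=0, keepdims=True)

# 2-D transform: IFFT across subcarriers -> range, FFT across frames -> Doppler.
rd = np.fft.fftshift(np.fft.fft(np.fft.ifft(csi, axis=1), axis=0), axes=0)
power = np.abs(rd) ** 2

dr = c / (2 * B)                # range bin width, ~0.94 m for 160 MHz
dv = wavelength / (2 * M * T)   # velocity resolution, ~0.03 m/s

k_dop, k_rng = np.unravel_index(np.argmax(power), power.shape)
r_est = k_rng * dr
v_est = (k_dop - M // 2) * dv   # zero Doppler sits at M//2 after fftshift

# Minimal 1-D cell-averaging CFAR along range at the strongest Doppler bin.
profile = power[k_dop]
guard, train, scale = 2, 8, 6.0
detections = []
for i in range(N):
    left = profile[max(0, i - guard - train):max(0, i - guard)]
    right = profile[i + guard + 1:i + guard + 1 + train]
    noise = np.concatenate([left, right])
    if noise.size and profile[i] > scale * noise.mean():
        detections.append(i)
```

With these parameters the peak lands within one range bin of the true target; real CSI additionally requires the time-phase alignment steps listed above before a batch is coherent enough for the Doppler FFT, which is why the platform's centimetre-level accuracy further relies on interpolation and phase compensation.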
LIVESENSE is implemented in Python and can currently handle CSI sampling rates ≥ 40 Hz on a standard Intel Core i7-155U, leaving headroom for GUI rendering and file Input/Output (I/O). Real-time plots include RDMs, historical tracks, phase and magnitude of subcarriers, and presence-detection overlays. Figure 2 presents an example of the real-time output RDM for moving targets from the LIVESENSE platform.

Fig. 3: Multi-target estimation in a busy cafe environment. The system can resolve two distinct targets (0.3 m and 1 m) despite heavy background Wi-Fi traffic, demonstrating robustness against environmental interference.

The demo uses a Wi-Fi signal with 160 MHz bandwidth in the 6 GHz band, with 512 subcarriers and a frame interval of 25 ms. Each Doppler batch consists of 32 frames. Under these conditions, the theoretical range resolution is 0.94 m and the Doppler resolution is 0.03 m/s, with a maximum unambiguous range of 5 m and a maximum velocity span of ±0.5 m/s. In practice, however, LiveSense achieves centimetre-level tracking accuracy through coherent integration, phase compensation, and adaptive interpolation, enabling precise estimation of range and velocity from COTS hardware. LiveSense was validated across multiple device SKUs and environments. The platform maintains a median range error of < 5 cm for hand gestures (< 0.6 m) and < 20 cm for whole-body motion (< 3 m), with a max range of 4 m. Figure 3 demonstrates a crowded café scenario, where the RDM successfully distinguishes a stationary user performing gestures at 0.2 m from a second target walking 1 m behind the device, demonstrating the resilience of the system to background movement and other active Wi-Fi traffic.

III. DEMO PLAN

During the demonstration, attendees are invited to interact directly with the LIVESENSE platform running on a commercial laptop. The system continuously tracks and displays human presence: as participants approach or move away from the device, a live overlay shows real-time estimates of both their distance and radial velocity. Attendees can also perform hand gestures within a 50 cm range, allowing the GUI to display the corresponding range estimations and velocity movements. To showcase the sensitivity and robustness of the platform, the demo includes scenarios where participants can stand still in front of the laptop, demonstrating LIVESENSE's ability to detect presence and small-scale vital signs, such as breathing.

IV. ACKNOWLEDGMENTS

This work was partially supported by the European Union's Horizon Europe SNS JU projects 6G-SENSES (grant 101139282) and MULTIX (grant 101192521).

REFERENCES

[1] V. Frascolla, D. Cavalcanti, and R. Shah, "Wi-Fi Evolution: The Path Towards Wi-Fi 7 and its impact on IIoT," Journal of Mobile Multimedia, vol. 19, no. 01, pp. 263–276, Sep. 2022. [Online]. Available: https://journals.riverpublishers.com/index.php/JMM/article/view/18515
[2] X. Li, J. Pegoraro, A. de la Oliva, V. Sciancalepore, J. Brenes, V. Frascolla, N. Petreska, Z. Cui, S. Robitzsch, D. Raddino, L. Zanzi, A. Blanco, J. Gutiérrez, J. Widmer, and X. Costa-Pérez, "MultiX: Advancing 6G-RAN Through Multi-Technology, Multi-Sensor Fusion, Multi-Band and Multi-Static Perception," IEEE Wireless Communications, pp. 1–8, 2025.
[3] J. Gutiérrez et al., "Seamless integration of efficient 6G wireless technologies for communication and sensing enabling ecosystems," in Artificial Intelligence Applications and Innovations. AIAI 2024 IFIP WG 12.5 International Workshops, 2024.
[4] X. Yu, B. Li, and J. Chen, "WiFi-enabled gesture recognition using attention-enhanced DenseNet," in Proc. IEEE/CIC Int. Conf. Commun. China (ICCC), Hangzhou, China, 2024, pp. 1692–1697.
[5] J. Chen, K.
Yang, X. Zheng, S. Dong, L. Liu, and H. Ma, "WiMix: A lightweight multimodal human activity recognition system based on WiFi and vision," in Proc. IEEE Int. Conf. Mobile Ad Hoc Smart Syst. (MASS), Toronto, ON, Canada, 2023, pp. 406–414.
[6] J. Sanson, R. Shah, M. Pinaroc, and V. Frascolla, "Extracting Range–Doppler Information of Moving Targets from Wi-Fi Channel State Information," 2025, to appear in Proceedings of the 2025 IEEE Global Communications Conference (GLOBECOM). [Online]. Available: https://arxiv.org/abs/2508.02799
[7] Z. Chen, C. Hu, T. Zheng, H. Cao, Y. Yang, Y. Chu, H. Jiang, and J. Luo, "ISAC-Fi: Enabling full-fledged monostatic sensing over Wi-Fi communication," arXiv preprint arXiv:2408.09851, 2024, accessed: 2025-M-D.
[8] A. T. Kristensen, A. Balatsoukas-Stimming, and A. P. Burg, "An SDR-Based Monostatic Wi-Fi System with Analog Self-Interference Cancellation for Sensing," in 2025 IEEE International Symposium on Circuits and Systems (ISCAS), 2025.
[9] C. Shi et al., "Demo: Device-free activity monitoring through real-time analysis on prevalent WiFi signals," in 2019 IEEE International Symposium on Dynamic Spectrum Access Networks (DySPAN), 2019.
[10] X. Lu et al., "Towards WiFi-based real-time sensing model deployed on low-power devices," in 2022 IEEE 19th International Conference on Mobile Ad Hoc and Smart Systems (MASS), 2022.