WiFi Can Read Through Walls

The number of wirelessly connected devices has grown rapidly in recent years, making wireless signals, such as WiFi, ubiquitous. This has generated considerable interest in using radio signals beyond communication, for sensing and learning about the environment. However, while sensing with WiFi signals has shown promise for applications involving motion (e.g., activity recognition, person identification), imaging the details of still objects with everyday RF signals, such as WiFi power measurements, has remained a considerably challenging problem due to the lack of motion. Yet, imaging still objects is important for scene understanding and context inference in general, and in particular for applications in smart homes, smart spaces, structural health monitoring, search and rescue, surveillance, and excavation, to name a few.

In this research, we have taken a completely different approach to tackle this challenging problem by focusing on tracing the edges of the objects instead.
The interaction of an edge with the incident RF signal is dictated by Keller's Geometrical Theory of Diffraction (GTD). More specifically, when a given wave is incident on an edge point, a cone of outgoing rays emerges according to the GTD, referred to as a Keller cone.
We then introduce **Wiffract: a new method to image the edges of still objects by exploiting the GTD and utilizing the corresponding Keller cones.** More specifically, our approach develops a mathematical model that uses the footprints that the resulting Keller cones leave on a receiver grid in order to infer the corresponding edge angles via hypothesis testing. Once it identifies high-confidence edge points, it propagates their inferred angles to the rest of the imaging space using Bayesian information propagation. Finally, we can further improve the resulting edge map using advances in computer vision. We extensively test the proposed methodology with several experiments. In particular, we show how our proposed approach enables the first demonstration of WiFi reading the English alphabet (even through walls). This application is particularly informative, as the English alphabet presents complex details that can be used to test the performance of our proposed imaging system.

Here are some key features of our proposed approach:

- It only uses the radio waves of off-the-shelf WiFi transceivers for imaging.
- It **does not require any prior RF data** for training a machine learning system for RF sensing.
- It exploits edge diffraction and uses the corresponding signatures that the Keller cones leave on the 2D receiver grid to develop a mathematical framework for edge tracing.
- In one example application, it can image and read (i.e., classify) uppercase letters of the English alphabet, **even through walls**, thus enabling the first demonstration of WiFi reading through walls.

Here, we briefly summarize our proposed approach and show sample experimental results. See the Publications for more details.

In this work, we present a new foundation for imaging still objects with only WiFi power measurements over a 2D grid of commercial off-the-shelf (COTS) receivers. Consider the scenario shown in Fig. 1, where a fixed wireless transmitter (located at \(\mathbf{p}_t \in \mathbb{R}^3\)) emits radio signals, which interact with a set of objects located at \(\mathbf{p}_o \in \Theta \subset \Psi\), where \(\Theta\) is the set of all object locations and \(\Psi\) is our imaging space of interest. The signals scattered from these objects are then captured by a uniform two-dimensional RX grid.

Figure 1. Sample imaging scenario: a transmitter emits wireless signals, while a receiver grid makes received power measurements in order to image objects in \(\Psi\).

As discussed earlier, when a wave is incident on an edge point, a cone of outgoing rays (the Keller cone) emerges, as shown in Fig. 2.

Figure 2. A sample edge interaction and the resulting Keller cone.

If we consider a receiver grid in the vicinity of the edge, the edge leaves different signatures on the grid depending on its orientation and the direction of the incident ray. More specifically, the RX array elements at the intersection of the RX plane and the corresponding cone are the ones that receive the signal power and thus "see" the impact of that edge point. The intersection of the Keller cone with the RX plane results in different 2D shapes (hyperbolas, ellipses, circles, etc.), formally referred to as conic sections.
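To make the cone-plane intersection concrete, here is a small Python sketch (our own illustration, not the authors' code) that samples outgoing rays on a Keller cone for a hypothetical edge point and intersects them with an RX plane at a given \(y\); the resulting \((x, z)\) points trace the conic-section footprint. The function name and geometry conventions are assumptions made for this example.

```python
import numpy as np

def keller_cone_footprint(p_edge, e_hat, d_inc, plane_y, n_rays=720):
    """Sample outgoing rays on the Keller cone and intersect them with the
    RX plane y = plane_y; returns the (x, z) intersection points.
    Per the GTD, every diffracted ray makes the same angle with the edge
    as the incident ray, i.e. r . e = d . e for unit vectors."""
    e_hat = e_hat / np.linalg.norm(e_hat)
    d_inc = d_inc / np.linalg.norm(d_inc)
    cos_b = d_inc @ e_hat                      # cosine of the cone half-angle
    sin_b = np.sqrt(max(1.0 - cos_b**2, 0.0))
    # Build an orthonormal basis (u, v) perpendicular to the edge direction.
    tmp = np.array([1.0, 0.0, 0.0])
    if abs(e_hat @ tmp) > 0.9:
        tmp = np.array([0.0, 1.0, 0.0])
    u = np.cross(e_hat, tmp); u /= np.linalg.norm(u)
    v = np.cross(e_hat, u)
    pts = []
    for t in np.linspace(0.0, 2*np.pi, n_rays, endpoint=False):
        r = cos_b*e_hat + sin_b*(np.cos(t)*u + np.sin(t)*v)  # unit ray on the cone
        if abs(r[1]) < 1e-9:
            continue                           # ray parallel to the RX plane
        s = (plane_y - p_edge[1]) / r[1]
        if s > 0:                              # keep forward intersections only
            p = p_edge + s*r
            pts.append((p[0], p[2]))
    return np.array(pts)
```

For instance, an edge along \(z\) illuminated broadside (incident ray along \(y\)) degenerates the cone into the plane perpendicular to the edge, so its footprint on the RX plane is a horizontal line; tilting the edge bends the footprint into a hyperbola-like curve.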

Figure 3. Diffracted rays off of edges with different orientations. Depending on the orientation of the edge and the direction of the incident wave, the diffracted rays leave different footprints on the RX grid.

We first derive the following approximation for the received CSI signal power on the RX grid: \[\bar{P}(\mathbf{p}_r) \approx 2\,\text{Re} \left\{\sum_{\mathbf{p}_o \in \Theta} \Lambda(\mathbf{p}_o, \mathbf{p}_t, \mathbf{p}_r)\, g^*(\mathbf{p}_t, \mathbf{p}_r)\, g(\mathbf{p}_o, \mathbf{p}_r) \right\},\] where \(\Lambda(\mathbf{p}_o, \mathbf{p}_t, \mathbf{p}_r)\) is the complex-valued amplitude attenuation and \(g(\mathbf{p}_i, \mathbf{p}_j)\) is the corresponding Green's function, given by \(g(\mathbf{p}_i, \mathbf{p}_j) = e^{-j\frac{2\pi}{\lambda}||\mathbf{p}_i - \mathbf{p}_j||}\). We then propose the following edge-based imaging kernel: \[\mathcal{I}(\mathbf{p}_m, \phi_i) = \Bigg| \sum_{\mathbf{p}_r \in RX} \bar{P}(\mathbf{p}_r)\, g(\mathbf{p}_t, \mathbf{p}_r)\, g^*(\mathbf{p}_m, \mathbf{p}_r)\, \mathbb{1}_{\mathbf{p}_r \in RX_{\mathbf{p}_m, \phi_i}} \Bigg|,\] where \(\mathbb{1}_{\mathbf{p}_r \in RX_{\mathbf{p}_m, \phi_i}}\) is the indicator function over the footprint of the edge on the RX grid, denoted by \(RX_{\mathbf{p}_m, \phi_i}\), \(\mathbf{p}_m\) is the location of the edge, and \(\phi_i\) is its corresponding angle. This kernel is implicitly a function of the edge orientation, a relationship we exploit to infer the existence and orientation of edges via hypothesis testing over a small set of candidate orientations \(\Phi\). In other words, for a given point \(\mathbf{p}_m\) that we are interested in imaging, we choose the orientation whose Keller-cone-based signature best matches the measurements (judged by the value of the imaging kernel): \[\phi^\star(\mathbf{p}_m) = \operatorname*{arg\,max}_{\phi_i \in \Phi}\ \mathcal{I}(\mathbf{p}_m, \phi_i).\]
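The kernel and the hypothesis test can be sketched in a few lines of Python. This is a self-contained toy, not the authors' implementation: the footprint masks below are arbitrary disjoint stripes standing in for the true Keller-cone footprints, and every position, wavelength, and attenuation value is made up for illustration. The test simply checks that the arg-max recovers the orientation whose mask generated the synthetic power.

```python
import numpy as np

WAVELENGTH = 0.06  # meters, roughly 5 GHz WiFi (illustrative)

def green(pi, pj):
    """Phase-only Green's function g(pi, pj) = exp(-j 2*pi/lambda * ||pi - pj||)."""
    return np.exp(-1j * 2*np.pi/WAVELENGTH * np.linalg.norm(pi - pj, axis=-1))

def imaging_kernel(P_bar, rx, p_t, p_m, mask):
    """I(p_m, phi) = | sum_r P(r) g(p_t, r) g*(p_m, r) 1{r in footprint} |."""
    return np.abs(np.sum(P_bar * green(p_t, rx) * np.conj(green(p_m, rx)) * mask))

# --- toy scenario ---
xz = np.stack(np.meshgrid(np.linspace(-0.5, 0.5, 30),
                          np.linspace(-0.5, 0.5, 30)), -1).reshape(-1, 2)
rx = np.column_stack([xz[:, 0], np.full(len(xz), 1.0), xz[:, 1]])  # RX grid at y=1
p_t = np.array([0.0, -1.0, 0.0])           # transmitter position (made up)
p_m = np.array([0.2, 0.0, 0.1])            # candidate edge point (made up)

# Hypothetical per-orientation footprint masks: disjoint stripes of grid rows.
angles = [0, 45, 90, 135]
masks = [(np.arange(len(rx)) // 30 % 4) == k for k in range(4)]

true_k = 2                                  # ground-truth orientation index
atten = 0.8 * np.exp(1j * 0.3)              # complex amplitude attenuation (made up)
# Only receivers on the true footprint see the diffracted power.
P_bar = 2*np.real(atten * np.conj(green(p_t, rx)) * green(p_m, rx)) * masks[true_k]

scores = [imaging_kernel(P_bar, rx, p_t, p_m, m) for m in masks]
phi_star = angles[int(np.argmax(scores))]   # hypothesis test over Phi
```

The key mechanism is coherence: over the correct footprint, the term \(\Lambda\, g^* g \cdot g\, g^*\) collapses to \(\Lambda\) at every receiver and the contributions add up, while a mismatched footprint collects little coherent energy.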

Since the edges of real-life objects have local dependencies, we model the imaging space as a graph. Once we find high-confidence edge points via the proposed imaging kernel, we propagate their information to the rest of the points using Bayesian information propagation. This further improves the imaging quality, since some edges may lie in a blind region or may be overpowered by other edges that are closer to the transmitters.
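As a much-simplified stand-in for this propagation step (not the paper's exact formulation), one can keep per-pixel distributions over candidate edge angles, pin the high-confidence pixels, and iteratively average each remaining pixel's distribution with its 4-neighbors. The function name and the averaging rule are assumptions for this sketch.

```python
import numpy as np

def propagate_beliefs(belief, confident, n_iters=50):
    """Spread discrete edge-angle distributions from high-confidence pixels
    to the rest of the grid by iterative neighbor averaging.
    belief: (H, W, K) per-pixel distribution over K candidate angles.
    confident: (H, W) bool mask of pixels whose beliefs stay fixed."""
    b = belief.copy()
    for _ in range(n_iters):
        # Average the four neighbors (np.roll wraps at the border; acceptable
        # for a sketch).
        nb = (np.roll(b, 1, 0) + np.roll(b, -1, 0) +
              np.roll(b, 1, 1) + np.roll(b, -1, 1)) / 4.0
        b = np.where(confident[..., None], belief, nb)   # pin confident pixels
        b /= b.sum(-1, keepdims=True)                    # keep distributions normalized
    return b
```

For example, a single confident pixel that strongly favors one angle gradually biases its neighbors toward that angle, while pixels far from any evidence stay close to uniform.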

Finally, once an image is formed, we can further improve it by using image completion/classification pipelines from computer vision. We note that traditional imaging techniques result in poor imaging quality when deployed with commodity WiFi transceivers, as surfaces can appear near-specular at lower frequencies, thus not leaving enough of a signature on the receiver grid.
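The paper's completion step relies on vision pipelines; as a minimal classical stand-in (an assumption of this sketch, not the authors' pipeline), morphological closing can bridge small gaps in a binary edge map, and a component-size filter can discard isolated noise pixels.

```python
import numpy as np
from scipy import ndimage

def complete_edge_map(edge_map, min_size=5):
    """Toy edge-map cleanup: closing with a 3x3 structuring element bridges
    one-pixel gaps along edges, then connected components smaller than
    min_size pixels are dropped as noise."""
    closed = ndimage.binary_closing(edge_map, structure=np.ones((3, 3), bool))
    labels, n = ndimage.label(closed)
    sizes = ndimage.sum(closed, labels, range(1, n + 1))
    return np.isin(labels, 1 + np.flatnonzero(sizes >= min_size))
```

On a letter-shaped edge map, this kind of post-processing closes short breaks in the strokes before the image is passed to a classifier.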

Figure 4. Sample experimental setup showing imaging through walls -- 6 antennas of two laptops serve as receivers while a WiFi card of one laptop is used for transmission. A vertical Styrofoam tower carrying the RX antennas is mounted on a ground robot to synthesize an RX grid in the x-z plane, on which we measure WiFi CSI power measurements from three TX antennas (of one WiFi card) simultaneously.

We carried out extensive experiments in three different areas, shown in Fig. 5. Area 1 is an open area from all four sides, while Area 2 is open from two sides, with the other sides having pillars, walls and other objects. Area 3 is a cluttered and roofed entrance of a building, which we have used extensively for through-wall experiments.

Figure 5. Experiment areas and sample objects. Outlines of objects are highlighted with dashed lines for better display.

**Experiments in Areas 1 and 2:**

Figure 6. Our final results for imaging 14 sample letter-shaped objects with WiFi signals. The dashed lines represent the ground-truth while the solid lines represent our image. Each imaging plane is 1.4m \(\times\) 1.4m, as marked for the first letter. It can be seen that the details of the objects are imaged well.

**Experiments in Area 3:**

Figure 7. Through-wall reading: Our approach has enabled WiFi to image and read the letters of the word "BELIEVE" behind the wall of Area 3. The dashed lines represent the ground-truth while the solid lines represent our image.

This work is supported by the National Science Foundation and the Office of Naval Research.