I just noticed that researchers from the University of Minnesota used my WiFi and ZigBee transceivers to prototype their novel approach to physical-layer cross-technology communication.
Their paper won the best paper award at ACM MobiCom 2017.
Zhijun Li and Tian He, “WEBee: Physical-Layer Cross-Technology Communication via Emulation,” Proceedings of 23rd ACM International Conference on Mobile Computing and Networking (MobiCom 2017), Snowbird, UT, October 2017, pp. 2-14.
Yay! My paper A Systematic Study on the Impact of Noise and OFDM Interference on IEEE 802.11p was accepted for the IEEE Vehicular Networking Conference 2017.
The paper presents an experimental study that compares the impact of noise and interference on IEEE 802.11p.
Whether there is a significant difference between the two is highly relevant for network simulations, which often use very simplistic models of the physical layer.
If you are interested, the paper is available here, the code for the GNU Radio simulations is available on GitHub, and the modifications of the WiFi driver will soon be available on the project website.
It’s also the first paper of my bachelor student Fabian Missbrenner.
I hope there’s more to come from him :-)
Bastian Bloessl, Florian Klingler, Fabian Missbrenner and Christoph Sommer, “A Systematic Study on the Impact of Noise and OFDM Interference on IEEE 802.11p,” Proceedings of 9th IEEE Vehicular Networking Conference (VNC 2017), Torino, Italy, November 2017. (to appear)
On 29 September 2017, I will participate in Probe: Research uncovered at Trinity College Dublin.
I will be around at Café Curie where Marie Curie Fellows have the chance to chat about their work and give short talks.
Hope to see you there.
I’m very happy that my paper summarizing the implementation and validation of my IEEE 802.11a/g/p transceiver got accepted for IEEE Transactions on Mobile Computing.
Bastian Bloessl, Michele Segata, Christoph Sommer and Falko Dressler, “Performance Assessment of IEEE 802.11p with an Open Source SDR-based Prototype,” IEEE Transactions on Mobile Computing, 2017. (to appear)
When I was trying to make the WiFi spectrum visible and audible in an earlier post, I was not really happy with the audio part.
Back then, I was playing some sines in ProcessingJS and did a simple ADSR (Attack-Decay-Sustain-Release) cycle per frame.
Yesterday, my friend asked me why I didn’t simply connect it to GarageBand.
Good question… so I changed the implementation to send MIDI notes to GarageBand, which can play various synths.
And really, this is light years better.
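For reference, the MIDI messages themselves are tiny: a note-on or note-off is just three bytes. A minimal sketch in Python of building them by hand; actually routing the bytes into GarageBand (e.g., through a virtual MIDI bus) is platform-specific and left out here.

```python
def note_on(note, velocity, channel=0):
    """Raw MIDI note-on: status byte 0x90 | channel, then note and velocity."""
    return bytes([0x90 | (channel & 0x0F), note & 0x7F, velocity & 0x7F])

def note_off(note, channel=0):
    """Raw MIDI note-off: status byte 0x80 | channel."""
    return bytes([0x80 | (channel & 0x0F), note & 0x7F, 0])

# A middle C (MIDI note 60) at medium velocity:
msg = note_on(60, 64)
print(msg.hex())  # 903c40
```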
I was playing around a bit with gr-fosphor for audio.
To make it look nice, I tried to increase the number of samples per second without changing the spectral shape.
(And no, that’s not just upsampling.)
That’s probably a stupid approach, but it worked somewhat.
What would you do if you were stranded on a lonely island?
I guess, learn how to prepare staple food – at least that’s what I did.
But seriously: in Ireland, I could find only toast, no real bread.
After I listened to the German CRE podcast episode about bread, I gave it a try.
Still pretty bad, but already better than toast :-)
The breads are made with sourdough and without baker’s yeast.
So only water, flour, and salt. Nothing else.
Marcus Mueller and I just gave a GNU Radio workshop at the Software Defined Radio Academy, held in conjunction with the HAMRADIO exhibition.
We were not sure how many people would show up since we had only 11 registrations, but, in the end, the room was completely full with about 80 people.
Great to see so much interest in SDR and GNU Radio also in the amateur radio community.
About 35 people had a laptop with them and were following us, creating an FM receiver that we successively extended for voice and, finally, APRS.
While we deliberately excluded installation, we heard that quite a few people had issues.
That was not a problem for the workshop since Marcus prepared a live image that could be booted from a USB stick, but maybe we should consider holding a GNU Radio install party next year.
In case you are interested, the material for the workshop is on GitHub.
I hope people had some fun at the workshop and will have a closer look at GNU Radio.
I always wanted to try some alternative approaches to make the spectrum visible and audible.
Over the last few days, I played around a bit; not with the whole spectrum, but, at least, with WiFi channel 1.
SpectroPhone – Channel 1 in C Major
What happens here?
The canvas on the right is a live representation of the WiFi traffic on channel 1.
Of course, I cannot see the content of the (mostly) encrypted signals.
But I can record the signal strength, the size, and the hardware address of the sender.
The hardware addresses are not assigned randomly.
Each manufacturer gets a certain range allocated by IEEE.
A list of these address ranges is available on their website.
With this list, I can map each WiFi signal to the manufacturer of the chip that sent it.
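The lookup itself is just a prefix match on the first three bytes of the address. A minimal sketch in Python, with the registry reduced to two example entries in the “(hex)” line format the IEEE file uses (the concrete entries here are illustrative, not copied from the registry):

```python
# Example snippet in the style of the IEEE OUI registry file.
OUI_TXT = """\
28-6A-BA   (hex)        Apple, Inc.
00-E0-4C   (hex)        REALTEK SEMICONDUCTOR CORP.
"""

def load_oui_table(text):
    """Map 'aa:bb:cc' address prefixes to vendor names."""
    table = {}
    for line in text.splitlines():
        if "(hex)" in line:
            prefix, _, vendor = line.partition("(hex)")
            table[prefix.strip().replace("-", ":").lower()] = vendor.strip()
    return table

def manufacturer(mac, table):
    """Look up the vendor for a full MAC address via its 3-byte prefix."""
    return table.get(mac.lower()[:8], "unknown")

table = load_oui_table(OUI_TXT)
print(manufacturer("28:6A:BA:12:34:56", table))  # Apple, Inc.
```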
What can I see?
For each signal that I overhear, I place a circle randomly on the canvas.
I use the following mapping to represent the WiFi traffic:
- Color: manufacturer
- Radius: amount of data
- Fade duration: signal strength
Radius and fade duration are straightforward.
To map the manufacturer, I hashed the name to an RGB color.
(That’s the same thing as picking a random color for the first signal and sticking to the choice for the following.)
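A minimal version of that hashing, sketched in Python; MD5 is used here only as a convenient deterministic hash, not a security choice, and the exact hash function is my assumption:

```python
import hashlib

def vendor_color(name):
    """Map a manufacturer name to a stable RGB triple by hashing it.
    The same input always gives the same color, across runs and machines."""
    digest = hashlib.md5(name.encode("utf-8")).digest()
    return digest[0], digest[1], digest[2]

print(vendor_color("Realtek"))  # always the same triple for Realtek
```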
I took some screenshots in different environments.
Left: Dublin’s buses use Realtek chips (yellow).
Right: Apple is green. Here, my mobile phone was uploading something when I came in range of the university network.
Left: The “green college” uses infrastructure from Dell and D-Link, which were (by chance) both mapped to greenish colors.
Right: The “pink streets” around the parliament.
Overall, the number and the size of frames indicate spectrum utilization, while the colors show the diversity of the hardware.
What can I hear?
To create an audio representation of the WiFi spectrum, I hash each manufacturer to a note in the C major scale.
The WiFi signals then play the note.
That means each WiFi signal increases the volume of the note in proportion to the amount of data it transmits.
After that, the volume decays exponentially.
If there are many signals from a manufacturer, the note will stay active as each signal pushes the volume up.
If there are no signals, the volume goes down.
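Sketched in Python below; the concrete scale notes, decay factor, and bytes-to-volume gain are assumptions of mine, since the post doesn’t give the actual constants:

```python
import hashlib

C_MAJOR = [60, 62, 64, 65, 67, 69, 71]  # MIDI note numbers C4..B4

def vendor_note(name):
    """Hash a manufacturer name onto a fixed note of the C major scale."""
    h = int(hashlib.md5(name.encode("utf-8")).hexdigest(), 16)
    return C_MAJOR[h % len(C_MAJOR)]

def step_volume(volume, frame_bytes, gain=1 / 2312, decay=0.9):
    """One update step: an overheard frame pushes the note's volume up in
    proportion to its size (clipped to 1.0), then the volume decays
    exponentially. With no frames, the volume just fades out."""
    volume = min(volume + frame_bytes * gain, 1.0)
    return volume * decay
```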
In the video, you might have noticed a constant beat from the small dark green circles.
They are beacons from Virgin Media routers that are deployed everywhere in my neighborhood.
I took another walk through the spectrum.
This time I created a starfield-like animation to visualize the frames.
It doesn’t always render at full frame rate.
So no happy eyeballs yet :-)
In addition, I down-scaled the volume a bit.
The idea was that beacon frames alone should not already push the volume to the clipping point.
That way, actual data transmissions could still be heard raising the volume, which was not always the case before.
It didn’t work too well though.
Apart from that, I found it a bit unintuitive that frames with very similar colors could produce totally different tones.
Therefore, I first mapped the colors to their hue and then mapped the hue to notes on the scale.
This is a bit like mapping the spectrum of the rainbow to the scale.
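A sketch of that hue mapping in Python, using the standard colorsys module; the concrete scale and the binning into seven equal hue intervals are my assumptions:

```python
import colorsys

C_MAJOR = [60, 62, 64, 65, 67, 69, 71]  # MIDI note numbers C4..B4

def hue_to_note(r, g, b):
    """Bin a color's hue (0..1 around the color wheel) into one of the
    seven scale notes, so similar colors produce similar tones."""
    hue, _, _ = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return C_MAJOR[min(int(hue * len(C_MAJOR)), len(C_MAJOR) - 1)]

print(hue_to_note(255, 0, 0))   # pure red, hue 0 -> 60 (C4)
print(hue_to_note(255, 20, 0))  # nearly the same red -> same note
```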
I’m really excited to join SRS (www.softwareradiosystems.com) to work on srsLTE, their Open Source LTE implementation.