Category Archives: Internet technology


Over the holidays (naturally) some [expletives deleted] inserted malware into this site.  It redirected visitors to an attack site, no doubt to nefarious ends.  The vulnerability appears to have been in a WordPress 4.0 script.  We’ve updated to WP 4.1 and taken other necessary steps.

Lessons learned:

  • Web sites aren’t “set and forget”.
  • Being a low-value target is not a defense.
  • Security updates don’t happen automagically.
  • Don’t trust the tools to set up protections appropriately.
  • Google is your friend… in the “tough love” sense of friend.
  • My first incident response involved a lot of trial-and-error.  I can’t imagine how a site owner with no CS background could begin to deal with these kinds of problems.

Anyway, all clear.  Maybe someday law enforcement will catch up with these [expletives deleted].

Can we please stop talking about “Fast Lanes”? Please?

“Network Neutrality” is in the press again, after the DC Circuit Court of Appeals vacated the FCC’s “Open Internet” Order and the FCC began the process of creating new rules within the guidelines set out by the Court.  FCC Chairman Tom Wheeler has outlined a new set of Rules, which effectively divide the baby.  His other alternative, the “nuclear option”, would have the FCC reclassify Broadband ISPs as Common Carriers, subject to Title II of the Telecom Act.  Grass-roots, Netroots (and perhaps some Astroturf) groups are banging the drum loudly for the FCC to go nuclear.  They are incensed that the proposed rules would allow differentiated traffic handling for compensation.

The catch phrase is “fast lane”, or sometimes “toll lane”.  As in “those evil monopolists will be selling access to a fast lane on the Internet to the corporate media, and degrading the slow lane to the point of driving out independent viewpoints and entrepreneurs and democracy as we know it”.  Every news article, every editorial, every post on the subject almost inevitably includes that metaphor.   Indeed, the entire debate seems to turn on it.  And it is flat-out misleading.

The “Information Superhighway” metaphor is credited to former Senator Al Gore Jr.  It was a tribute to his father, Sen. Al Gore Sr., who was instrumental in the legislation that created the Interstate Highway System.  It was also an astonishingly prescient prediction of the impact the Internet would have on our daily lives.  Al Gore is no technologist.  He needed some way to express the notion of a ubiquitous, richly interconnected data network as critical infrastructure for the 21st Century.  The highway metaphor served his purpose.  And hopefully he didn’t take it too literally.

The problem is that the Internet is not like a highway.  The behavior of a road system in carrying individual cars is completely unlike the behavior of the Internet, which carries a duality of individual packets and flows.  Modern physics teaches that light simultaneously has a wave nature and a particle nature; similarly, traffic on the Internet simultaneously has a packet nature and a flow nature.

As an example, if a highway becomes congested, all cars slow down or stop, cars back up, and the resulting traffic jam grows until the congestion clears.  The Internet handles congestion by dropping packets, with the expectation that the receiver will detect the missing packets in each flow, and take their absence as an indication that a congestion event has occurred.  The receiver is then expected to instruct the sender to send fewer packets belonging to the flow at a time.  Now, imagine a highway that handled congestion this way.  Would it have artillery pieces at intersections to blow up random cars?  Unless cars traveled in something analogous to a flow, how would a destination know that a car had gone missing, or signal back to an origin that it should dispatch fewer cars at a time?
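
For readers who like code, here is a toy sketch of the receiver-side half of that story – inferring dropped packets from gaps in a flow’s sequence numbers.  This is my own illustration, not real TCP; the function name and numbers are made up.

```python
# Toy illustration (not a real TCP implementation): a receiver infers
# congestion from gaps in the sequence numbers of a flow's packets.

def detect_losses(received_seq_nums):
    """Return the sequence numbers missing from a flow.  The receiver
    treats their absence as a sign that congestion dropped those packets,
    and will tell the sender to slow down."""
    expected = set(range(min(received_seq_nums), max(received_seq_nums) + 1))
    return sorted(expected - set(received_seq_nums))

# Packets 3 and 7 never arrived -> a congestion event has occurred.
print(detect_losses([1, 2, 4, 5, 6, 8, 9]))  # [3, 7]
```

A highway, of course, has no sequence numbers: there is no way for a destination to know that car number 7 never showed up.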

And that’s just one of the Internet’s behaviors.  If, to extend this thought experiment, one were to imagine a transportation system that behaved like the Internet, it would be… truly bizarre.

Other metaphors fail as well.  The late Sen. Ted Stevens was roundly ridiculed for comparing the Internet to “a series of tubes”.  That metaphor holds no better – but no worse – than the highway metaphor.  In fact, the Internet behaves like nothing in people’s everyday experience — except, of course, for those of us whose life’s work is to think about such things.  And reasoning by the highway metaphor has been the cardinal fallacy in the “Network Neutrality” debate.

Once we start thinking about the behavior of the Internet on its own terms, we can start thinking in terms of 25 years of research, standardization and experience in “Integrated Services Networks”. We can introduce the notion of “Best Effort Service” into the debate.  Best Effort is how the public Internet presently behaves.  In the packet nature of the Internet, Best Effort means that a source will send packets into the network, and the network will try to deliver them in the order they are received.  In the flow nature of the Internet,  Best Effort means that if all flows are responsive to congestion, then each will get a “fair share” of the bandwidth along its path.
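
To make “fair share” concrete, here is a toy sketch – mine, not anything out of an RFC – of the classic max-min “water-filling” allocation: every flow at a bottleneck gets an equal share, and flows that want less than their share return the surplus to the pool.  The function name and numbers are purely illustrative.

```python
# Toy max-min fair ("water-filling") allocation of a bottleneck link.
# Illustrative only; real networks compute this implicitly, not centrally.

def max_min_fair(capacity, demands):
    """Give each flow min(its demand, an equal share); redistribute
    whatever the small flows don't use among the bigger ones."""
    alloc = {}
    remaining = dict(demands)
    cap = capacity
    while remaining:
        share = cap / len(remaining)
        # flows whose demand fits under the current equal share
        satisfied = {f: d for f, d in remaining.items() if d <= share}
        if not satisfied:
            for f in remaining:       # everyone left splits what remains
                alloc[f] = share
            break
        for f, d in satisfied.items():
            alloc[f] = d              # small flows get exactly what they asked
            cap -= d
            del remaining[f]
    return alloc

# Flow 'a' only wants 2, so 'b' and 'c' split the remaining 8 evenly.
print(max_min_fair(10, {"a": 2, "b": 5, "c": 8}))  # {'a': 2, 'b': 4.0, 'c': 4.0}
```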

Best Effort service is optimized for “elastic” flows.  An elastic flow transmits at the highest rate it is allowed to, but doesn’t mind adjusting its rate to match its fair share of the bandwidth along its path.  By slowly ramping up its rate, and responding to congestion signals by sharply reducing it, an elastic flow participates in a “share and share alike” paradigm.  Web browsers, e-mail programs and remote backups are all common applications that generate elastic flows.
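
That ramp-up-and-back-off pattern is the classic AIMD (additive increase, multiplicative decrease) discipline.  Here is a toy loop of my own devising – not any real stack’s code, with purely illustrative constants – showing the characteristic sawtooth around the fair share:

```python
# Toy AIMD loop: ramp up gently, back off sharply on congestion.
# Constants are illustrative, not drawn from any real TCP stack.

def aimd(rounds, fair_share, increase=1.0, decrease=0.5):
    rate, history = 1.0, []
    for _ in range(rounds):
        history.append(rate)
        if rate > fair_share:     # stand-in for "the network dropped a packet"
            rate *= decrease      # multiplicative decrease: back off sharply
        else:
            rate += increase      # additive increase: probe gently upward
    return history

rates = aimd(rounds=12, fair_share=5.0)
# rates == [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 3.0, 4.0, 5.0, 6.0, 3.0, 4.0]
```

The rate saws back and forth around the fair share rather than sitting still – which is exactly the “share and share alike” behavior Best Effort counts on.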

We can also talk about “inelastic” flows.  An inelastic flow sends data at its own characteristic rate, with little or no ability to adjust without degrading the user experience.  Best Effort service is not particularly good for inelastic flows, and vice versa.  Elastic flows are supposed to play nicely with each other, and cooperatively share capacity fairly.  Inelastic flows don’t know how to play nice; they send at whatever rate they send at.  Worse, during periods of congestion, their unresponsiveness actually causes cooperating elastic flows to slow down to less than their fair share of bandwidth.  Streaming video is the proverbial 800-pound gorilla of inelastic flows.
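
Some back-of-the-envelope arithmetic shows the squeeze.  The numbers here are purely illustrative:

```python
# Why an unresponsive flow squeezes responsive ones (illustrative numbers).

capacity = 10.0                  # link capacity, arbitrary units
n_flows = 3                      # one inelastic video stream + two elastic flows
fair_share = capacity / n_flows  # ~3.33 each, if everyone cooperated

inelastic_rate = 6.0             # the video stream sends at 6, regardless
leftover = capacity - inelastic_rate
elastic_share = leftover / 2     # 2.0 each -- well below the 3.33 fair share

print(round(fair_share, 2), elastic_share)  # 3.33 2.0
```

The elastic flows back off, as they were designed to; the video stream doesn’t, and pockets the difference.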

A “Premium” service is a better way to handle inelastic flows.  Instead of a “share and share alike” paradigm, it reserves enough bandwidth through the network to handle the inelastic sender’s characteristic rate.  The network knows what that rate is, and enforces it.  The understanding is that as long as the flow’s sender doesn’t exceed that rate, the Internet won’t drop any packets.  Faster than that, all bets are off.  However, if there isn’t enough bandwidth to safely reserve for a new flow, the flow is not admitted; it gets the Internet equivalent of an “all circuits busy” signal.  All of these behaviors prevent congestion, rather than respond to it the way Best Effort does.  Engineers will recognize that elastic traffic over a Best Effort service forms a closed-loop control system, while inelastic traffic on a Premium service forms an open-loop control system.
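
As a sketch – my own, with invented class and method names, not any real router’s code – the whole Premium discipline fits in a few lines: reserve the characteristic rate at admission, police against it, and refuse flows that don’t fit:

```python
# Toy admission control for a "Premium" service (invented names, illustrative
# numbers): reserve a flow's characteristic rate up front, police it, and
# refuse new flows when the reservable pool is exhausted.

class PremiumLink:
    def __init__(self, reservable_bandwidth):
        self.free = reservable_bandwidth
        self.reservations = {}

    def admit(self, flow_id, rate):
        """Admit the flow only if its rate can be safely reserved; otherwise
        it gets the equivalent of an 'all circuits busy' signal."""
        if rate > self.free:
            return False
        self.free -= rate
        self.reservations[flow_id] = rate
        return True

    def conforms(self, flow_id, observed_rate):
        """Police the flow: packets beyond the reserved rate get no promise."""
        return observed_rate <= self.reservations.get(flow_id, 0.0)

link = PremiumLink(reservable_bandwidth=10.0)
print(link.admit("video-1", 6.0))  # True: 6.0 of 10.0 reserved
print(link.admit("video-2", 6.0))  # False: only 4.0 left, flow is refused
```

Note the contrast with Best Effort: nothing here reacts to congestion after the fact; the congestion is prevented at the door.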

It turns out that elastic flows using Best Effort service can peacefully coexist on the Internet with inelastic flows using a Premium service, as long as there is a large enough pool of bandwidth reserved for the Best Effort service to maintain acceptable performance.  The insight behind that is that dynamic allocation of the Internet’s bandwidth is not a zero-sum game. In fact, if anything, isolating elastic flows from inelastic flows will improve the performance of both.

This “Premium” service is the thing  that the FCC proposes to allow ISPs to offer – within limits.   It is also the thing that the Netroots condemn as a vile abomination.  This is the subject of the current brouhaha.

It is fair to note that these notions of a multi-service Internet – Premium service, bandwidth reservation, admission control, etc. – have always been controversial in the technical community.  The argument has been that if you don’t have enough bandwidth to satisfy everybody, just get more.  It is no coincidence that I’ve never heard that argument from anybody who has had to sign the purchase order for “more”.  Still, this thinking has permeated the Netroots and probably underlies some of their opposition.

Now, Premium Service does pose some very real competitive and consumer risks.    The FCC’s big challenge is to create enough safeguards to protect against them.   I’m well convinced, as is Chairman Wheeler,  that they can do so;  I am also aware that it will be difficult, and that any loopholes will be exploited.  But that’s a story for another day.

Note:  This entry expands on a comment I wrote on the industry website Light Reading.