Wednesday, August 29, 2012

Android Layout XML - Oh my

Despite the fact that my schedule by and large is fairly awful, I have managed to set aside at least some time each week to do a bit of research / development.  I'll avoid posting which time that is lest someone try to schedule a meeting during that block, but the project this week has been to look at finally doing a bit of Android development, particularly on my recently replaced iMac and the Nexus 7.

The process of getting everything set up was not bad: a few minor glitches here and there, but nothing too severe.  Nice to see that being the case.

Beyond getting the basic Hello World example working, my first project is to try to get a Bluetooth Serial Port Profile (SPP) listener going for the purposes of instrumenting my eBike.  The eBike or really electric motor scooter (it does 60 mph, whoot) was my sanity prize for doing our departmental accreditation efforts.  That and the family vacation to Hawaii last year.  Even then, I am not sure it was enough but lest I dwell too much on accreditation and become grumpy, I'll get back to the task at hand.

Basically, the prototype variant that I had does not have the new fancy display that the current bikes ship with but it does have a serial TTL output.  I found a fairly neat Serial TTL to Bluetooth converter at MDFly and got it rigged up late last year.  At the time, I had been using a simple Serial TTL / USB cable and a bit of C# code on my notebook.  Unfortunately, the bike reared its prototype / test pilot nature this year and has largely been out of action, not allowing me significant riding time until this past week.  Current Motor has been wonderful about troubleshooting and hopefully the adventure that was troubleshooting the bad BMS (Battery Management System) leads to better troubleshooting for other riders.

Now that my bike is back, I had been occasionally using a simple Bluetooth SPP application from the Play Store.  However, that is not a whole lot of fun as it does not give me GPS or a display for my bike.  I could also just settle for the simple SparkFun adapter to log things to a microSD but again, where is the fun in not having a full display :)

The task then this week was to actually build a real Android app together with a dash of Bluetooth.  The core Bluetooth work was not too bad (Google's documentation is pretty sweet) but holy lord of obfuscation, the layout code is abysmal.  Granted, I am a bit biased from working with WPF and the C# side of things but wow, it is pretty awful.  I thought WPF had some awful kludges (properties and the magic XAML compilation behind the scenes) but Android layout definitely takes the cake.  I am hoping the obfuscation is just a byproduct of the fact that I am using sample code but I am not terribly optimistic.
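To illustrate what I mean, here is a hand-written sketch of my own (not the SDK sample code) of about the most trivial screen imaginable for this project: a status label above a scrolling log of lines received over SPP.  Even this carries a pile of namespace and sizing ceremony that XAML's defaults would mostly absorb:

```xml
<!-- Minimal hand-rolled layout (my own sketch): a status label above a
     scrolling log.  Note that every single view must restate its
     layout_width / layout_height attributes. -->
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical">

    <TextView
        android:id="@+id/statusLabel"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Disconnected" />

    <ScrollView
        android:layout_width="match_parent"
        android:layout_height="0dp"
        android:layout_weight="1">

        <TextView
            android:id="@+id/sppLog"
            android:layout_width="match_parent"
            android:layout_height="wrap_content" />
    </ScrollView>
</LinearLayout>
```

The `0dp` height plus `layout_weight` trick for "take the remaining space" is exactly the sort of incantation I am grumbling about.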

If anyone would like to educate me why I am off base and espouse the merits of Android's layout XML over WPF / XAML, feel free to do so.    

Thursday, August 23, 2012


As always, a bit late on the blog post but better late than never with regards to discussing various papers / highlights from SIGCOMM as well as the workshop that I attended, CellNet.

  • Positively wonderful opening note / tutorial.  The organizers (Li and Morley) asked Zoltán Turányi from Ericsson to give a nice overview of emerging problems in cellular networks.  It was a nice twist that I think served as a good foundation for the rest of the day.  I am not sure this is viable at a general conference (versus say a keynote) but it worked well for the workshop.
  • I was a bit less sanguine about the Open Radio work by Sachin Katti from Stanford.  They had a wicked cool paper at the core SIGCOMM conference but it felt a bit like cognitive radio redux.  That being said, pulling off full speed 802.11n with TI + USRP goodness is pretty awesome and I have to give them serious props for that.  We will certainly have to do a bit of looking into the TI C66x DSP that their group had been using.
With that, we had a small break for coffee though we were running quite late.  It was then that we realized that HotSDN had 180 attendees (wow!).  The good news was that the workshop got back on schedule in fairly short order.
  • Neat paper on Buffer Bloat presented by Sue Moon of KAIST who was subbing for Rhee from NC State.  Bummer was that Sprint was by far one of the worst Buffer Bloat offenders.  Evidently they have an upcoming IMC paper examining tweaks to TCP to try to improve things.  Amusing anecdote of noting that AQM (Active Queue Management) is doomed out of the gate, which I think is pretty much the community consensus.
  • Two papers by AT&T on the core of their network, primarily from a modeling perspective.  Both were invited talks, with some neat graphs on RNC / tower performance with regards to TCP.
Nice standard conference lunch though I must say, the Finlandia lunches all week were excellent.  It got a bit crowded with the whole conference there Tuesday through Thursday.  After lunch, it was time for our talk on WiFi offloading, which I can finally discuss freely on the blog now that the paper has dropped.  We ended up trading slots with the Stanford folks so KK's adviser could catch his talk.
  • Needed more coffee for my talk and I realized I cut perhaps one too many slides in the interest of time.  Post lunch + jet lag means a far less than impressive talk.  Evidently Shu (my student) taped it but I'm not sure I will be adding it to YouTube any time soon.
  • KK Yap had a nice bit about using multiple adapters to get better performance.  They used an underlying Linux virtual switch on Android to allow one to fuse together multiple adapters and derive better performance.  He did a nice job presenting his talk.
  • Neat work on multi-path TCP that definitely got into the deep weeds; sort of a different perspective, taking the standards route for multi-path TCP rather than the somewhat more ad hoc approach of the Stanford talk.
  • The session wrapped up with a talk by Suman Banerjee about their wireless bus work.  Very cool and some interesting enterprise level problems when you have numerous buses / data plans to manage.  Definitely problems that most academics are not pondering but are likely to be quite important at a macro level.  Very impressive with how far the bus work has come from several years back when he had presented it visiting us at ND.  Well done guys.
Two final talks wrapped things up, one on modeling (not necessarily my cup of tea) and one on assisted GPS, namely how badly assisted GPS has been implemented on many devices.  Lots of low hanging fruit for vendors to fix.  The modeling talk brought up an interesting point: with 3GPP and the potential to signal back to the base station, how much should we signal back?

The workshop wrapped up with a panel discussing various issues / open items for cellular networks.  The poor panelists barely got in their slides before we started discussing each particular person's results.  The net result was that Li did not get a huge amount of time to go over his thoughts on a cellular SDN which was a bit of a bummer but the back and forth on the panel was excellent.

All in all, it was a nice workshop.  Quite well attended (though not bursting like HotSDN).  Well done Li and Morley!

Thursday, August 16, 2012

Reviews / TPCs

I'll sneak in a semi-related post with regards to TPCs (Technical Program Committees) that relates a bit to McKeown's comment about making the SIGCOMM tent bigger, not as big as SIGGRAPH but about four times as big attendee-wise as it is now.

Having recently gotten back a few sub-par sets of reviews, I am feeling a storm of grouchiness with regards to the overall review process and the on-going conference versus journal debate.  Note that this is in no way related to SIGCOMM as it has been a few years since I have submitted a paper there.  We are much better at self-filtering and as of late, most of our work has not been terribly conducive to SIGCOMM either.  Though my top negative review of all time from SIGCOMM still holds a place near and dear in my heart :)

On to my thoughts / recommendations for TPCs / reviewers:
  • I had posted on this a few years back but I think it is still the case.  Conference papers are for interesting / thought-provoking work, not necessarily perfect nor complete work.  We had a workshop review two years ago for a paper where the reviewers were asking for essentially a journal quality evaluation rather than thinking about the paper properly through the lens of the workshop / conference.  For that particular review, I joked with my students that if we had had the answers at the time of submission (*cough* workshop at INFOCOM 2011 *cough*), we sure as heck would not have been submitting to some podunk workshop.
  • Having chaired a few workshops, it makes me shake my head when one gets back any peer-reviewed paper that has fewer than three reviews.  That is just a failure of the TPC co-chairs at quality control and in the worst case, one can always backfill reviews as the co-chair.  That is kind of your job.  Some might object and point out that the ultimate job is to ensure good papers but I would argue conferences are there to encourage feedback and discussion.  Bad reviews outside of the top tier conferences (you sort of have no choice on those conferences) make me unlikely to submit in the future or to encourage others to do so.
  • More recently, a few reviews have come back containing only reviewer comments but no actual ratings, requiring one to click the link to actually see the ratings.  Really?  Are we not CS profs, capable of automating such things?
  • A weird review recently came in where a journal reviewer for our E2E QoS work considered the number of citations when evaluating the work.  Seriously?  I must have missed the memo where the solidity of the science depends on how many cites your conference work gets.  Then again, the reviewer clearly had a bias against all things QoS, mumbling about all that QoS work that thinks about bandwidth as a scalar quantity.  On the plus side, the reviewer's comments were so wrong that the review might inspire me to finally write the dynamics paper I had in mind to prove them wrong, as I firmly believe the Roberts work from the mid-2000's is still as valid as ever when looking at large scale aggregation characteristics of flows (i.e. VBR really does become CBR in the aggregate).  The big trick is how to get the data without being an employee of a large ISP.
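As a toy illustration of that aggregation claim (entirely my own sketch, not the Roberts model), sum enough independent bursty flows and the relative variation of the aggregate collapses:

```python
import random
import statistics

# Toy sketch (my own illustration): model each flow's instantaneous rate
# as an exponential draw (very bursty, coefficient of variation ~ 1).
# Aggregating N independent flows shrinks the coefficient of variation
# of the total by roughly a factor of sqrt(N).
random.seed(42)

def relative_variation(num_flows: int, samples: int = 2000) -> float:
    """Coefficient of variation (stdev / mean) of the aggregate rate."""
    totals = [sum(random.expovariate(1.0) for _ in range(num_flows))
              for _ in range(samples)]
    return statistics.pstdev(totals) / statistics.mean(totals)

single = relative_variation(1)      # one VBR flow: wildly variable
aggregate = relative_variation(100) # 100 flows: nearly constant bit rate
assert aggregate < single / 5       # VBR becomes CBR-ish in the aggregate
```

Nothing deep here, just the law of large numbers doing the smoothing, which is exactly why per-flow burstiness matters so much less at large aggregation points.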
I'll wrap up with a note regarding a nice paper written by our chair about advising junior faculty on the role of conferences versus journals.  I won't steal his thunder but the basic gist is that the notion that systems researchers are special flowers that can only do conferences is not really supported by the data. From the conclusion of the article:

"Look for research questions that other people will care about and that you can see a novel way to approach. Do the best research that you are able to do. Publish your work in the best conferences that you can get your work into. Also publish in the best journals that you can get your work into. Don’t think either-or, think both-and.  ...  Don’t buy into the “only-conferences-matter, conferences-are-better-than-journals” viewpoint. ... the empirical evidence is that this view is wrong." 

Definitely food for thought.

SIGCOMM - Day 1 (Tuesday)

SIGCOMM comments are still being queued up and my CellNet impressions are perhaps a bit delayed.  I should have something later on today now that I brought my notebook to the conference rather than my iPad.

A few comments / thoughts from Day 1 of SIGCOMM 2012:

  • Wonderful keynote by Nick McKeown with regards to his experiences over his career.  He encouraged people to work on problems that industry did not care about (yet) and preferably those problems that make industry angry.  Certainly he is accomplishing that with SDN (Software Defined Networking) and Cisco.  The other interesting aspect is that he puts everything he does in the public domain to make collaborations easier, i.e. no IP (Intellectual Property) at risk means no issues in fighting over IP.  Plus, the bonus argument of taxpayer funds means it should be fully available / free.  
  • Interesting was the tail end of Nick's talk discussing the future of SIGCOMM and pushing the notion that SIGCOMM is still pretty insular and needs more people.  He pitched that SIGCOMM should double the number of papers and aim for 2,000 attendees.  Not entirely bad I would say.
  • Extremely uncomfortable moment during the first paper session.  One of the questioners (in typical SIGCOMM fashion) pointed out bluntly that the paper being presented should not have been accepted as someone already offers the exact service noted for $49 / month.  You could have heard a pin drop after the commenter made his remark.  He was most certainly right, though it was probably a bit much to expect everyone to keep track of all the various startups / efforts in the space.  Kudos to the presenter for staying quite composed during the questioning.
  • That being said, the relative insularity of SIGCOMM citations seems still as prevalent as ever.  The paper in question failed to cite anything outside the typical SIGCOMM sphere of conferences (NSDI, CoNext, etc.).  It did seem though that this year's SIGCOMM was definitely broader in terms of the authors.  Quite a few of the usual suspects but not overwhelmingly so.  Stanford's wireless group was heavily represented and they have some pretty cool work (see Day 2 comments coming up).
  • SIGCOMM definitely does student outreach right.  Very well done with N2Women (Day 2) and the student banquet.  Though with two hundred students in attendance, things probably got a bit diluted given the popularity of the European location.  Bummer though that none of the industry sponsors came to the student banquet.

To wrap up, holy number of analyzed streams by Conviva (Stoica / Zhang and company).  I am thinking that is a space we should just stay away from as it is hard to compete with 2 billion flows that Conviva analyzes.  Simply amazing but man, either I am old or something as their titles on the slides are nearly impossible to read with green text on a white background.

Sunday, August 12, 2012


I am currently out in Helsinki this week for SIGCOMM 2012, specifically to present our paper on WiFi offloading at the CellNet 2012 workshop.  It has been just over ten years since I was last in Helsinki for ICC 2001.  I am really looking forward to the CellNet workshop as our paper is probably one of the better ones that we have done in a while in terms of the utility of the results.  Perhaps not our finest writing but the results are quite profound for carriers when debating how much gain they will see for WiFi offloading.

As such, I am going to try to do my best to put up impressions of several of the best and perhaps not so great papers / posters / demos from the four days of workshops and posters that I will be attending.  There is one odd paper on Wednesday on MIMO by Katabi out of MIT; SIGCOMM seems a very odd place for such a paper and I will be curious to see how much of a systems-ish focus there is in the paper versus an EE / PHY layer focus.

Friday, August 10, 2012

Content-Centric Networking

Just yesterday, there was an article on Xconomy linked via Slashdot highlighting Van Jacobson's current work with Content-Centric Networking, aka CCN (see the original CoNext 2009 paper here).  While I have long admired all of the interesting work that Van has done (TCP tweaks, feedback on DiffServ), the whole CCN bit has me shaking my head a bit.  Is there something amazing that I am missing with regards to CCN?  Perhaps I am a bit too jaded from multicast, active networking, DHTs (shudder), and the various packet caching work from a decade or so back but I am failing to see how this is not just yet another effort doomed to fail.

While the piece does capture some of the issues with CCN and acknowledges several of the shortcomings, I think it severely underestimates the difficulty and ignores a few critical issues:

- The comments on scaling of the core are way off.  I'm a bit perplexed by the take and I assume it is just the author of the article taking some liberties.  We are not exactly strained for capacity in the core and the Olympics should serve as a pretty good reminder that we will be A-OK for some time.  That being said, it does not mean we are good at the edge, particularly the wireless edge (hopefully nobody thinks CCN offers anything in the last mile), but I view the core as a largely solved problem that boils down to one of architecture / planning.  Quality of Service (QoS) would have gotten traction a long time ago if we really were that constrained.  There is a reason why I stopped doing QoS work nearly in its entirety though every once in a while I feel the need to wax nostalgic about unsolved problems from earlier work in QoS.

- The entrenched players are not just entrenched, they are intertwined in how the very core of the Internet works.  Not in a protocol sense but in its economic core (ads, content integrity, dynamic content, etc.).  I think all too often researchers ignore this at their own peril though there are certainly a few communities I use as illustrations for my students as poster children for these sorts of things.    

QoS / multicast are neat concepts just like CCN is in an abstract / academic sense but these sorts of efforts ignore fundamental economic realities.  There has to be a compelling efficiency gain or cost savings or competitive advantage to justify it.  For IPv6, it took a really, really, really long time for the address space to get scarce enough to justify it.  Back when I started grad school in the late 90's, IPv6 was just around the corner.  14 years later (oy), World IPv6 days are no longer needed as we might actually start deploying it (huge hat tip to the DoD for requiring it).

Fiber / capacity is too cheap and the existing CDNs are good enough that CCN cannot gain a serious enough advantage.  Moreover, as several astute Slashdot posters (some days a rarity) point out, the economics of ads and customized content further impair it.  I just don't see how this gets out of the toy-projects-in-the-lab phase.  I don't view various players looking at the work / participating in workshops as a great indicator either; of course they are going to look at it.  The key is when their Director of Operations is putting serious resources into it, otherwise it is just fun academic work.

Some enterprising student should do a survey paper on what technologies succeeded / failed and their overall time to success / failure.  I have a sense that there is a corollary to Metcalfe's Law of Networking related to the inertia of entrenched players and the order of magnitude gain required to improve things.

- Integrity / validation is the anchor that is going to prevent CCN from creating that order of magnitude.  At the end of the day, you have to trust that the content you are getting is legit.  Yes, there are various tricks built into CCN that take advantage of new advances in security but it boils down to classic PKI / digital certificates (and verifying the certificate is still good, yeah, Certificate Revocation Lists) or just hoping and crossing your fingers.

The net result is a cap on the performance gains keeping CCN from ever offering an order of magnitude better solution over existing CDNs and making it viable.  Or it becomes even more awesome to be a bad guy in the new cloud / CCN universe.  All of my spidey security senses point to this as being a really, really bad idea and the best explanations I have heard with regards to how they make it work amount to a bunch of hand waving that again reduce back down to classical PKI / digital certificate issues.  It is not that the folks doing CCNs cannot make it work, it is that the compromises made to keep it reasonably secure will severely hamper performance gains.  Either that or I need to find some black hats to don.
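To make the trust problem concrete, here is a toy sketch (entirely my own illustration, not actual CCN / CCNx code): a digest traveling with the content catches tampering by a cache, but the digest itself is only as trustworthy as whatever signed and delivered it, which drops you right back into PKI land.

```python
import hashlib

def publish(name: str, data: bytes) -> dict:
    """Package a named content object with a digest of its payload."""
    return {"name": name,
            "data": data,
            "digest": hashlib.sha256(data).hexdigest()}

def verify(obj: dict) -> bool:
    """Consumer-side check: recompute the digest over the bytes actually
    received, no matter which untrusted cache served them."""
    return hashlib.sha256(obj["data"]).hexdigest() == obj["digest"]

original = publish("/ebike/telemetry/1", b"speed=60")
assert verify(original)        # a clean copy from any cache checks out

tampered = dict(original, data=b"speed=5")   # a cache altered the bytes
assert not verify(tampered)

# The catch: an attacker who alters the bytes can simply recompute the
# digest too, so the digest must itself be signed by the publisher, and
# now you are back to certificates, revocation lists, and all the
# classic PKI baggage this post is grumbling about.
```

The sketch is deliberately naive; the point is that no amount of clever naming removes the need for that publisher signature and its revocation machinery.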

- Wireless medical and CCN, really?  Let alone the minefield of privacy issues, a big challenge when it comes to digital information, particularly digital health information, is controlling / understanding where the data resides in order to properly assess risk.  We already have enough trouble securing our existing data; now we are going to scatter / replicate it so that any crypto mistakes can be multiplied without the capacity to revoke / fix mistakes once it is replicated?  I don't get the sense that a risk averse community (health care) is going to hop on board with condoning something like this, regardless of the magical cryptographic solutions.  There seem to be too many vectors for attack / failure and CCN would just exacerbate them.  Again, it is not an unsolvable problem, it just seems that whatever gains you get will be wholesale wiped away or wholly excluded (ex. low power computing) in the wireless medical device space as well.

- It trades abstraction beauty for operational complexity.  Let's just say that I would not want to be the poor sap stuck with trying to craft the SLAs (Service Level Agreements).  Those SLAs will be awful and probably something much closer to dark magic than actual solid math.  Putting on a network operations hat makes me shudder at the thought of troubleshooting issues.
Am I overly grumpy about CCN?  Sure.  Should I publish this blog post without being a full professor yet?  Maybe :)  Is CCN potentially cool from an academic standpoint?  Heck yes, that is the beauty of academic work.   

But we have been down these sorts of roads before and the amount of attention being paid to the topic seems to far exceed its potential benefit.  I firmly believe that the fundamental principles needed to make CCN secure will limit it to being nothing more than a better abstraction and, even more, the operational cost of CCN in terms of stability / troubleshooting makes it a deal breaker.

Maybe I just need to figure out a way to cache grumpiness such that I can do a local cache hit in N years to optimize things.

Faculty Job Talks - More Musing

As I had meant to get to earlier this summer (as always, things just keep slipping by), I wanted to comment a bit more about the job talk.  This week I wanted to focus on how one defines their research, namely your research and why we want to hire you.  From a higher level perspective, there are two key things that I need to walk away with from your job talk on the research front.

Skipping over the fact that one can safely assume that you are giving a good, engaging talk, here are some thoughts from an Engineering / Science perspective:

  • I need to understand where your research stands in the greater scheme of things in your field.  Part of that is motivating why your problem is important but if I am not in your area, I need to understand where your work pushes the overall research envelope.  Some of the best talks in doing this have used 2-D or 3-D charts (they were viz people but even non-viz people can mimic them), i.e. the two design constraints or issues are scale and security, and my work pushes the envelope in scaling well and being secure.  It allows us on the committee to get a sense of what sort of a research community you would fit into and gives a sense to faculty who might be interested in collaborating where your research might plug into / complement their own.
  • I need to understand what you personally did to advance the research.  Yes, you may have been part of a team effort but particularly for a place such as ND, we tend not to have huge groups of faculty in your particular area.  Hence, I need to know that you can function independently outside of the context of your prior support system.  Collaborative efforts are great, but what is your part?  What did you uniquely contribute or what insight of yours drove the project?  It is a bit of what I would call personal cheerleading but remember, we are hiring you, not your entire team (adviser, co-workers, etc.), to come join our faculty.  Trust me, your adviser / co-workers will definitely understand; they would most definitely want you to succeed in finding a position.  Don't be overwhelming with braggadocio (that can rub people the wrong way too) but far too many people come in a bit too humble / unassuming.  This is very hard for most folks, particularly as the trend is towards larger groups and inter-disciplinary projects, but it is sort of a necessary skill to survive in today's academic environment. 
We want to hire you; give us the motivation at the talk that seals the deal as to why *your* research is great.  Help me understand what you did and how it is complementary to our department.