20 November 2014

Upgrading a gem for newer Rails versions

How I Upgrade a Gem when Rails Upgrades

Rails is continually being improved and goes through a moderate version bump (Rails 4.0 to 4.1, for example) at least once a year. For maintainers of gems which others use, this means that the gem must also be upgraded and tested against the latest Rails version. I typically do my upgrades annually, in the fall, and by that time I've forgotten the process I used twelve months ago.
So my blog today will try to capture this process. I maintain the Milia gem for basic multi-tenanting in a Rails environment. I recently updated it for Rails 4.1. Rails 4.2 is already at release candidate status, so I will record my steps in upgrading from 4.1 to 4.2.

Process in a nutshell

  1. Read up on what has changed: release notes, upgrade blog, tutorials.
  2. Make sure that the gem's unit and functional tests are working for the older Rails version.
  3. Check out the "edge" branch in your git repository for working on the upgrade.
  4. Bump the gem version number.
  5. In your gem test area, empty the gemset, delete the Gemfile.lock.
  6. Update any Ruby, Rails version references in Gemfile, generators, docs, shell scripts.
  7. Create a new Rails project (in the new version); Gemfile references the edge branch for gem.
  8. Try to start the server for the new app; deal with deprecations, errors, references.
  9. Manually test the new app enough to be confident that most things have been caught.
  10. Repeat steps 7 through 9 until no more changes.
  11. Run rake test:units.
  12. Run rake test:functionals.
  13. Update the gem README.
  14. Build the gem; publish when ready.

Each step explained


What has changed?

There are four primary places to get information about new versions of Rails: the release notes, the upgrade guide, blogs, and tutorials (step 1 above).
Each of these has helpful information to know prior to starting an upgrade process. Take notes if needed and dig deeper if any areas seem unclear or confusing.
For Milia, the upgrade was fairly straightforward from Rails 4.0 to 4.1. But in looking at the 4.1 to 4.2 upgrade, I'm noting the following possible impacts:
  • Web Console
  • Responders (might be used in the test app?)
  • Rails html scanner (test app?)
  • render with a String (test app?)
  • railties changes (test app?)

unit and functional tests working?

Might be a good idea, before going any further, to make sure that the current released gem version's unit and functional tests are working.
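Per steps 11 and 12 in the nutshell above, that boils down to running the existing suite against the currently locked (older) Rails version; a minimal sketch, using the rake tasks Milia already has:
  $ bundle install
  $ rake test:units
  $ rake test:functionals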

$ git checkout edge

Actually Milia uses a branch called "newdev" ... but same thing. Prior to this step, you might also want to freeze a branch for the current version (say, version 1.1.0) as well:
  $ git checkout master
  $ git checkout -b v1.1.0
  $ git push origin v1.1.0
  $ git checkout edge

Bump gem version number

Now that we're in edge, I like to symbolically bump the version number, so I have a record in the git trail of when work started. In this case, Milia is going to 1.2.0.
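The conventional place for this is the gem's version file; a sketch, assuming Milia follows the usual lib/<gemname>/version.rb layout (the path and the prior version shown are illustrative):
  # lib/milia/version.rb
  module Milia
    VERSION = "1.2.0"   # bumped from 1.1.0 to mark the start of the Rails 4.2 work
  end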

empty testing gemset; delete test Gemfile.lock

You might not have these, but Milia does because there's a stub of a Rails application which is required for running the unit tests. Best to do the deletes now so you won't forget and become confused later when entering the testing phase.
  $ rvm gemset empty miliatest
  Are you SURE you wish to remove the installed gems for /home/xxxxx/.rvm/gems/ruby-2.1.3@miliatest?
  (anything other than 'yes' will cancel) > yes
  installing gem /home/xxxxx/.rvm/gem-cache/gem-empty-1.0.0.gem --local --no-ri --no-rdoc.
  :
  $ rm Gemfile.lock


Update any Ruby, Rails version references

Update any Ruby, Rails, or other gems' version references in milia.gemspec, Gemfile, generators, docs, shell scripts, etc.
This can be a bit tricky. For example, Milia has auto generators for installing milia within a sample web app. But for that to work correctly, the latest version of Rails also has to be installed in the gemset of my development project directory.
So this stage is the time to make a first draft of the potential changes based on your notes (a gemspec sketch follows the lists below).

Here's a list of possible things to check for:

  • does rvm need to be bumped to the latest version (for latest Ruby version)?
  • latest Rails in the project-space gemset?
  • what are the potential latest versions of any key gem or app dependencies?

And here's a list of files which sometimes change when Rails changes:

  • config/boot.rb
  • config/application.rb
  • config/routes.rb
  • config/initializers/*.rb
  • config/environments/*.rb
  • any railtie.rb files in the gem
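For the gemspec itself, the change is usually just bumping the dependency constraints; a hypothetical fragment of milia.gemspec (the gems and version numbers shown here are illustrative only, not Milia's actual dependency list):
  # milia.gemspec (fragment; constraints are illustrative)
  s.add_dependency 'rails',  '>= 4.2.0.rc2'
  s.add_dependency 'devise', '~> 3.4'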


Create a new Rails project

You want to create a new project so that you'll be working with the default application scaffolding for the Rails version to which you're upgrading. Otherwise, you sometimes run into situations where you're testing on an app built against an older scaffolding, which is still supported, but might be dropped in the future. So you want to make sure that everything that defines a new version is in the app you'll be testing.
And make sure that the new app will reference the new gem version: its Gemfile should point at the gem's edge branch. There are two ways to do this.

If you're the only developer, you can have the Gemfile reference the gem (assuming the local branch is at edge):
  gem 'milia', :path => "../../milia"

Or, you can specify the edge branch on github:
  gem 'milia', :git => 'git://github.com/dsaronin/milia', :branch => 'edge'

Start and try out the new app


Try to start the server for the new app. Invariably it will throw up many deprecation warnings and errors. Don't ignore these in your haste. Fix each one and try again until none are showing. Because you're providing a gem, you want something which has been correctly upgraded for the Rails version you're supporting. Gem users don't like getting deprecation messages for things which they cannot directly control.

There are two different ways I try to surface these errors. The easiest is just to run the console; it forces the initialization process without the added complication of getting a web server started.
  $ rails c

After that comes up correctly, I'll try to actually start the app via a web server.
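For a stock Rails app that's simply:
  $ rails s
(The sample milia app described at the end of this post is started with foreman start instead.)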

Do some minor manual testing just to feel confident that things are running. Then move on to the unit and functional tests.

Unit & functional tests

Obviously, now is a good time to run these and fix any deprecations and errors which arise.

Revise README and any other documentation


Build & publish the gem
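When the tests are green and the docs are current, building and publishing follow the standard gem workflow; a sketch, assuming the gemspec sits at the repo root and the new version is 1.2.0:
  $ gem build milia.gemspec
  $ gem push milia-1.2.0.gem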


Consistent way to scaffold a new milia app

Note: this is a reference for me to remember how I like to start and test milia on my dev system.
  $ cd projectspace/
  $ gem install rails --pre
  $ new_project sample-milia-app
  $ cd sample-milia-app/
  $ bundle install
  $ rails g milia:install --org_email='conjugalis@gmail.com'
  $ rails g web_app_theme:milia
  $ rake db:create
  $ rake db:migrate
  $ foreman start

21 October 2014

Kinokero (cloudprint) tutorial part 2

Kinokero Tutorial Part 2: Cloudprinting

Using Google Cloudprint Proxy Connector in Ruby


Kinokero is a ruby gem which provides complete Google CloudPrint Proxy Connector functionality. That means it can turn any OS-connected printer into a cloudprint-accessible printer, as long as the internet connection is maintained.

This post is a continuation of Part 1 Getting Started, so please check that out for prerequisites and expected environments.

A basis for building a proxy connector

Within the gem source itself, there are two places that demonstrate using kinokero: the console and the testing setup helper. The console is located at console/twiga.rb, and the testing helper at test/test/test_kinokero.rb. Because the test setup is rather specialized, the console makes the better template.

Console

During development, I needed a convenient setup and testing structure to individually trigger GCP primitives and see traces of the request and result. I'm calling that the "console" and it has no function inherent to the gem, other than a convenient setup and debugging apparatus. Rather than having it vanish, I have made it part of the gem superstructure and it can be run independently, as though it were an application, for calling and testing the gem.

The console has a simple means of persistence (a seed yaml file) for any printers which are registered. The seed yaml requires, at the least for initial startup, a section of data for the (required) test printer.

In this tutorial, I will try to point out the important aspects to keep when building a program to use the kinokero proxy connector.

Structural

I use rvm (Ruby version manager) together with bundler to manage my various project environments. So you'll need a Gemfile with at least the following line:
  gem 'kinokero'

then you'll need to run the bundler:
  $ bundle install

Configuration and initialization

Next, think about any configuration and initialization you'll need. The console does it in the following way, which should go at the head of your program:

main module invokes configuration
#!/usr/bin/env ruby

# *************************************************************************
# *****  app configured, initialized, and ruby-prepped here   *************
# *************************************************************************

    # make it so that our lib is at the head of the load/require path array
  $:.unshift( File.expand_path('../lib', __FILE__) )
    # kick off module configurations
  load File.expand_path('../config/application_configuration.rb', __FILE__)
  
# *************************************************************************

  require 'yaml'
  require 'erb'
  require 'active_support/core_ext/hash'

application_configuration.rb holds any basic gem invocation stuff

And config/application_configuration.rb is basically the following (of course you'll need to adapt it to your own namespace conventions):
# *************************************************************************
# *******    general configuration for console appliance    ***************
# *************************************************************************

module Twiga

# *************************************************************************
require 'rubygems'
require "kinokero"
require 'json'

# *************************************************************************
# ******  mimic the way RAILS sets up required gems  **********************
# Set up gems listed in the Gemfile.
# *************************************************************************
# 
ENV['BUNDLE_GEMFILE'] ||= File.expand_path('../../Gemfile', __FILE__)
require 'bundler/setup' if File.exists?(ENV['BUNDLE_GEMFILE'])

# *************************************************************************

# ########################################################################
end  # module Twiga

Retrieve Persistence

Your main module will need to retrieve any existing printer information from persistence. The console uses config/gcp_seed.yml for its persistence (it needs to be on a writable medium). The following routines are your basic read/write for the persistence. Also included is build_device_list, which returns a kinokero-style gcp_control_hash built from the seed data.

# *************************************************************************
# get managed printer data
# *************************************************************************

GCP_SEED_FILE = "../config/gcp_seed.yml"

# #########################################################################
# ##########  working with seed data  #####################################
# #########################################################################

# -------------------------------------------------------------------------

  def load_gcp_seed()
    if @seed_data.nil?
      @seed_data =  YAML.load( 
          ERB.new(
            File.read(
              File.expand_path(GCP_SEED_FILE , __FILE__ ) 
            )
          ).result 
      )

    end
  end

# ------------------------------------------------------------------------

  def write_gcp_seed()
    unless @seed_data.nil?

      File.open(
        File.expand_path(GCP_SEED_FILE , __FILE__ ), 
        'w'
      ) { |f| 
        YAML.dump(@seed_data, f) 
      }

    end   # unless no seed data yet
  end

# ------------------------------------------------------------------------

  def build_device_list()

    load_gcp_seed()    # load the seed data

      # prep to build up a hash of gcp_control hashes
    gcp_control_hash = {}

    @seed_data.each_key do |item|
      
         # strip item into hash with keys
      gcp_control_hash[ item ] = @seed_data[ item ].symbolize_keys

    end   # convert each seed to a device object

    return gcp_control_hash
    
  end   # convert each seed to a device object

# ------------------------------------------------------------------------


Command line options

If you wish to have some command line switches to control kinokero, the following function will parse them and return an options hash.

# #######################################################################
# ##########  command line options handling  ############################
# #######################################################################

# -----------------------------------------------------------------------

  def parse_options()

    options = {}

    ARGV.each do |arg|
      case arg
        when '-m' then options[:auto_connect] = false
        when '-v' then options[:verbose] = true
        when '-q' then options[:verbose] = false
        when '-t' then options[:log_truncate] = true
        when '-r' then options[:log_response] = false
      else
        ::Twiga.say_warn "unknown option: #{arg}"
      end    # case

    end   # do each cmd line arg

    ARGV.clear   # all switches have been consumed; remove from command line
      
    return Kinokero::Cloudprint::DEFAULT_OPTIONS.merge(options)

  end

# ------------------------------------------------------------------------

Invocation & instantiating a Proxy object

The console just jumps into the following code as part of the main module. Note that build_device_list automatically loads and parses the seed yml file if it hasn't already been loaded. As kinokero processes this device list, it will automatically set up and activate any logical cloudprint printer which is designated as active, and it will establish an on-line connection with GCPS unless the :auto_connect option is false.

     # start up the GCP proxy
  @proxy = Kinokero::Proxy.new( build_device_list(), parse_options )

    # remember the device list for easy access
  @my_devices = @proxy.my_devices   # not necessary to extract locally

Register & remove a printer

These are the two most basic actions which a Proxy is expected to perform. To update persistence, the console has several helper methods: update_gcp_seed, add_gcp_seed_request, and write_gcp_seed. These can be found in the console and are not included here, as they are not germane to the main usage of kinokero.


# ------------------------------------------------------------------------

  def do_register( item )
    new_request = build_gcp_request( item )

    response = @proxy.do_register( new_request ) do |gcp_control|

      update_gcp_seed(gcp_control, gcp_control[:item] ) do |seed|
        add_gcp_seed_request( seed, new_request )
      end  # seed additions

    end   # do persist new printer information

    unless response[:success]
      puts "printer registration failed: #{response[:message]}"
    end

  end


# ------------------------------------------------------------------------

  def do_delete( item )
    item = validate_item( item )
    @proxy.do_delete( item )
    @seed_data[item]['is_active'] = false
    write_gcp_seed()
  end

# ------------------------------------------------------------------------


# if item hasn't yet been defined in seed data, create one out of
# thin air by using test as a template
# ------------------------------------------------------------------------

  def build_gcp_request( item )

    use_item = validate_item( item )

    return {
      item:  item,
      printer_id:   0,  # will be cue to create new record
      gcp_printer_name: "gcp_#{item}_printer",
      capability_ppd: @seed_data[use_item]['capability_ppd'],
      capability_cdd: @seed_data[use_item]['capability_cdd'],
      cups_alias: @seed_data[use_item]['cups_alias'],
      gcp_uuid:         @seed_data[use_item]['gcp_uuid'],
      gcp_manufacturer: @seed_data[use_item]['gcp_manufacturer'],
      gcp_model:        @seed_data[use_item]['gcp_model'],
      gcp_setup_url:    @seed_data[use_item]['gcp_setup_url'],
      gcp_support_url:  @seed_data[use_item]['gcp_support_url'],
      gcp_update_url:   @seed_data[use_item]['gcp_update_url'],
      gcp_firmware:     @seed_data[use_item]['gcp_firmware'],
    }
  end

# ------------------------------------------------------------------------

  def validate_item( item )
    return ( @seed_data.has_key?(item) ? item : 'test' )
  end

# ------------------------------------------------------------------------

Putting it all together

All of the above sections are part of the main module in the console, console/twiga.rb. You can put these together in any manner that makes sense for your purpose; most of the methods shown above are merely helper functions to prep the required kinokero data structures.
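As a very rough sketch of how the pieces line up at startup (the method names are the helpers shown above; 'office_laser' is a made-up seed item, and the real console adds an interactive command loop and error handling on top of this):

  # hypothetical minimal startup sequence, mirroring console/twiga.rb
    # parse command-line switches and bring any active printers on-line with GCPS
  @proxy = Kinokero::Proxy.new( build_device_list(), parse_options() )

    # register a printer; build_gcp_request falls back to the 'test' seed
    # as a template if 'office_laser' isn't yet defined in gcp_seed.yml
  do_register( 'office_laser' )

    # ... the proxy now services cloudprint jobs until shutdown ...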

17 October 2014

Kinokero (cloudprint) tutorial part 1

Kinokero Tutorial Part 1: Getting Started

Using Google Cloudprint Proxy Connector in Ruby

Kinokero is a ruby gem which provides complete Google CloudPrint Proxy Connector functionality. That means it can turn any OS-connected printer into a cloudprint-accessible printer, as long as the internet connection is maintained.

The gem itself includes separate classes to handle the GCP server protocol (Cloudprint), the GCP Jingle notification protocol (Jingle), and a class for interacting with CUPS devices on a linux system (Printer). Persistence is expected to be handled by whatever is invoking Kinokero. The initial beta release of kinokero works for CUPS-style printers on linux-like OSs.

Parts 1 and 2 of this tutorial will show how to get kinokero up and running in a ruby application, focusing only on the highest level, the Proxy class, and will not show how to interact directly with the lower-level Google Cloudprint Services (GCPS), such as takes place in the Cloudprint class.

The kinokero gem itself includes a small working example that invokes the gem. It is contained in the console folder and represents the best template for getting the gem working. This tutorial will cover the preparation (Part 1) and the key points of invoking the gem (Part 2).

Note: I use Ruby 2.0 in an RVM environment on an Ubuntu (linux) workstation, so this tutorial will be specific to that type of environment. You will also want to make sure you have registered, with CUPS, a working printer on your *nix system.

Registering with Google APIs; setting environment variables

You'll need a client ID for your proxy for obtaining OAuth2 authorization codes, as the GCP documentation points out:
The client ID can be obtained as explained  here ». [relevant portion shown below] Client IDs don't need to be unique per printer: in fact, we expect one client ID per printer manufacturer.

Before your application can use Google's OAuth 2.0 authentication system for user login, you must set up a project in the Google Developers Console to obtain OAuth 2.0 credentials, set a redirect URI, and (optionally) customize the branding information that your users see on the user-consent screen. You can also use the Developers Console to create a service account, enable billing, set up filtering, and do other tasks. For more details, see the Google Developers Console Help.
Specifically, you'll end up with four items that you'll need to put into environment variables accessible by your application. Environment variables are used for security reasons, so that the values won't appear in any public repositories for either the gem or your application. See below for names and sample data. I put these into my .bashrc file. This only needs to be done once, no matter how many proxy connectors you wish to invoke.

  export GCP_PROXY_API_PROJECT_NBR=407407407407
  export GCP_PROXY_API_CLIENT_EMAIL="407407407407@developer.gserviceaccount.com"
  export GCP_PROXY_CLIENT_ID="407407407407-abcd1abcd2abcd3abcd4abcd5abcd5ef.apps.googleusercontent.com"
  export GCP_PROXY_CLIENT_SECRET="someSECRETencryptedValue"

The CLIENT_SECRET will be the typical encrypted-looking gibberish.
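If you want to confirm the variables are actually visible to your Ruby process (these are the names expected per the exports above), a quick illustrative check:

  # sanity check that the four variables are set (illustrative only)
  %w[ GCP_PROXY_API_PROJECT_NBR GCP_PROXY_API_CLIENT_EMAIL
      GCP_PROXY_CLIENT_ID GCP_PROXY_CLIENT_SECRET ].each do |var|
    abort( "missing environment variable: #{var}" ) if ENV[var].to_s.empty?
  end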

On a personal note, using the Google Developers Console to get these values was not straightforward. So you may have trial & error wrong turns as you go about trying to coax these values out of the Great And Wonderful Wizard of Oz.

Adding a resource for persistence

The Kinokero proxy requires the invoking code to provide persistence for GCPS-issued information (such as OAuth2 tokens, printer id, etc). One reason for this is that when (or unexpectedly if) the entire machine running the proxy is restarted, the cloudprint printers which had already been registered need to be brought back on-line with GCPS. This is accomplished through persistence of the critical information.

The console itself relies on a yaml file for persistence ( console/config/gcp_seed.yml ).

Kinokero has a primary hash which is required by several of the classes within the gem. In the README, this is called 'gcp_control' hash, and is discussed in detail in that document. The yaml seed file is used to prep this hash prior to instantiating a Proxy object.

Adding seed data for a test printer

To set up the seed data for your test printer, go to your OS and discover the name of an attached printer.
  $ lpstat -v

device for laserjet_1102w: hp:/net/HP_LaserJet_Professional_P_1102w?ip=192.168.1.65
device for lp_null: ///dev/null

In the sample shown above, there are two printers registered: "laserjet_1102w" and "lp_null." The former is the only actual working printer, so it would be chosen as the test printer. We'll next need to discover the full path to the PPD file for that printer. On an Ubuntu system, it will be: /etc/cups/ppd/laserjet_1102w.ppd . You will need to convert this into the CDD format required by GCP v2.0. Google has a handy converter that makes that easy. You can access it here ». Once you've converted the file, name it and place it in: /etc/cups/cdd/laserjet_1102w.cdd .

You'll also need information about the actual device: manufacturer, model, firmware version number, serial number for the printer (aka uuid), and some URLs for setup, support, and updates for the printer. I'm not sure how GCPS uses these, if at all, at this time.

Armed with this information, edit console/config/gcp_seed.yml, obviously replacing with your actual values in the appropriate places. The following attributes are required prior to registering the test printer.
  test:
    item: 'test'
    cups_alias: 'laserjet_1102w'
    gcp_printer_name: 'gcp_test_printer'
    capability_ppd: '/etc/cups/ppd/laserjet_1102w.ppd'
    capability_cdd: '/etc/cups/cdd/laserjet_1102w.cdd'
    gcp_uuid: 'VND3R11877'
    gcp_manufacturer: 'Hewlett-Packard'
    gcp_model: 'LaserJet P1102w'
    gcp_setup_url: 'http://www8.hp.com/us/en/campaigns/wireless-printing-center/printer-setup-help.html'
    gcp_support_url: 'http://h10025.www1.hp.com/ewfrf/wc/product?product=4110396&lc=en&cc=us&dlc=en&lang=en&cc=us'
    gcp_update_url: 'http://h10025.www1.hp.com/ewfrf/wc/product?product=4110396&lc=en&cc=us&dlc=en&lang=en&cc=us'
    gcp_firmware: '20130703'

In Part 2, we'll show how to instantiate a kinokero proxy object and register a printer.


27 March 2014

What you should know about the Google Cloud Platform

GCPLive Event Summary & Review



The Google Cloud Platform team is making a major marketing push to spread the word about GCP. On March 25, 2014 they held GCPLive in San Francisco, the first of a 27-city global roadshow to highlight their cloud platform. As the developer and provider of a modest web application ( “Majozi” ) which does automatic duty-roster scheduling, I attended GCPLive because I am interested in GCP as a possible home for my app. Bottom line: I will definitely be giving GCP a try for my application.
This review of GCPLive will consist of: a description about the uniqueness and strong points of GCP, an assessment of some possible weaknesses (“challenges?”) of GCP, and a critique of the event itself. The perspective here is written from my viewpoint, as an engineer and as a potential customer.
Background
I am not a high-end huge enterprise developer. I have had experience with production apps on EngineYard, Heroku, Amazon, and other PAAS hosts. Agility and ultra rapid prototyping are key to me. I have a low tolerance for clumsy interfaces, awful documentation, and inability to support DRYness[1].
Majozi has an interesting backend called the Rostering Engine, which assigns people to duty assignments when creating a duty roster typically for a 3-month period and about 300+ roster duties. Rostering[2] is an NP Hard problem involving a massive combinatorial space on the order of 10 to the 20th power. You can understand my interest in tapping a scalable computing space, even though not even Google’s infrastructure could possibly evaluate every possible roster combination within the timeframe of the existence of our Solar System!


What is unique about GCP?

The marketplace already has several IAAS cloud platforms: Amazon's EC2, Rackspace, IBM's cloud servers, and Microsoft's. And it has several PAAS providers: Heroku, EngineYard, etc., many of which run on the aforementioned IAAS providers.




Is Google’s Cloud Platform (GCP) just another cloud service? 
Hardly. To view it as that is to totally miss the point and the potential. To say that GCP is competing with EC2 is, to me, laughable. Media and tech bloggers who merely compare the two as equals in the same foot race are short-sighted.

Google’s internal platform with the Google toolbox exposed



What’s different? GCP is more than just an infrastructure highway like EC2. Much more. GCP’s overview page clearly states the hands down winning argument for using GCP: It runs on Google’s infrastructure. This infrastructure “returns billions of search results in milliseconds, serve[s] 6 billion hours of YouTube video per month and provide[s] storage for 425 million Gmail users.” That’s a powerful track record.
But David, you say, doesn’t Amazon’s amazing and powerful e-commerce prowess also run on its EC2? (Actually, I don’t recall Amazon ever making that claim, so correct me if I’m wrong). Even if that were the case, it is not in the same computational league as the things which Google Services achieve every second throughout the global interwebs and with more massive amounts of data.
This GCP infrastructure depends on four key components: Global Network, Storage, Redundancy, and Cutting-edge computer science services.

Why wait for the next Google Whitepaper to be turned into a Hadoop?

It’s this final point that I want to highlight as putting GCP in a league of its own. “Google has led the industry with innovations in software infrastructure such as MapReduce, BigTable and Dremel. Today, Google is pushing the next generation of innovation with products such as Spanner and Flume. When you build on Cloud Platform, you get access to Google’s technology innovations faster.” Meaning, you won’t have to wait for a Google whitepaper to be turned into another Hadoop to take advantage of cutting-edge technology.
This is the real kicker and something that none of the other providers can give. GCPLive gave demonstrations of some of those capabilities. It’s being widely acknowledged that the immediate future of applications -- mobile and otherwise -- lies in contextual awareness and predictive responses. As Google exposes more of its innovations into GCP, developers will better be able to use the same building blocks to give their applications similar capabilities. That is a strongly compelling reason to choose GCP for any interesting project.
Already GCP offers APIs for Translate (over 1000 language pairs) and Predictive. I can envision that eventually BigBrain (the deep neural net capability used for image recognition and speech recognition) will be one of the GCP available services. That’s a powerful incentive to start putting even middling applications on GCP now. Imagine what your engineers could offer your customers if they have access to GCP’s predictive, contextual, and signal recognition capabilities? Even if that’s not now in the current market requirements for an application, it soon will be.

Failure is the norm

Google’s infrastructure has been industry leading for the last ten years: its data centers are the best, its reliability beats everyone else, its global speed and responsiveness and massive scaling beat everyone else. GCP brings that capability to any developer’s doorstep. GCP is the internal Google platform exposing the Google toolbox for developers.
Google’s infrastructure has been designed around the concept that “Failure is the norm.” Hardware is NOT the path to reliability; software is. They plan for failure instead of reacting to failure. For example, at the conference, they demonstrated Live Migration: the ability of the infrastructure to switch hardware nodes in an instant at the sign of a failure. The demo involved streaming an HD video which didn’t drop a frame during the switchover.
My web application runs on Heroku, a PAAS, which in turn runs on Amazon's EC2 IAAS. Over the last three years, Amazon has had at least three major breakdowns and cloud failures, one of which lasted for over 12 hours, making my (and many other more prominent) applications go dark. EC2 is good; GCP is great. One of EC2's failures was so clumsy and severe that it forced EngineYard to add its own ability to dynamically shift any of its customers' applications between EC2 data centers. That's a strong admission that they cannot trust EC2's ability to cope with failure. And isn't that supposed to be part of an IAAS?

Quick overview of the GCP offering

The Google Cloud Platform website contains a better description with fancier graphics, so I’m just going to cover a few points to illustrate scope.

GCP: compute, storage, networking, services

Compute
GCP offers a continuum between flexibility (IAAS) and agility (PAAS), with four different gradations between. For those needing full flexibility to define the environment, one can set up a VM to taste, and use that image when scaling. 
GCP brings the ability for automatic scaling to massive size to meet demand. This has been covered in other press releases demonstrating the ability to scale within a few seconds to dynamically handle changing query loads up to 1m qps.
Consistent performance is also a given. Currently, my middling app on Heroku suffers from extreme swings in responsiveness, depending upon what other applications are doing on the shared slice. Google engineers have smoothed out those wrinkles for consistent and reliable responsiveness no matter what the load is.

Storage

Google’s Cloud Store, Cloud SQL, and NoSQL Cloud Datastore offer a full range of choices. The conference demonstrated the ability to continuously add 100K+ rows per second without affecting realtime performance analysis of the resulting data using BigQuery.
Networking
This takes advantage of many of Google's internal innovations: load balancing and Google's own cloud DNS, Google's own fiber network between data centers (and now encrypted!).
Services
This is the especially exciting part: BigQuery, Cloud Endpoints for RESTful application interfaces, a translate API, and a predictive API. GCP also offers free & fast connects to all Google Services.
Green
Google is carbon-neutral, invests heavily in carbon-alternative energy sources, and has the industry-leading PUE for its data centers (the ratio of overall data center energy usage, including cooling, to the energy that goes into the actual server computing).

Room for improvement



GCPLive introduced many new features, especially a simplified makes-sense pricing structure, pricing reductions, and alternate levels between IAAS and PAAS involving accessible Virtual Machines (VMs).
It’s obvious that I am enthusiastic about GCP in general and intend to begin porting over my application on an experimental basis. In this section, however, I will point out a few areas that I think the GCP could improve. I do not expect my opinions to be universal amongst all developers, nor do I expect GCP to adopt all of the suggestions. I would be pleased if the thrust of these suggestions possibly reveals some blind spots in GCP engineer’s thinking and assumptions and they begin trying to correct that.

App Engine languages



Unsurprisingly, three of App Engine's four standard languages reflect Google's own internal bias: Java, Python, and Go; PHP rounds out the fourth. But little Heroku offers Ruby, Java, Node.js, Python, Clojure, and Scala, many of which are used in modern rapidly prototyped applications. At the very least (ur hmm, my bias), Ruby should be part of GCP's standard mix (but don't lock in the versions!).


Practical programming's state-of-the-art



Why? Ruby and especially Ruby on Rails has led practical programming innovation over the last several years. The Ruby community has made AGILE and DRY development methodologies the expected norm; they strongly incorporate a full range of test structures; have standard structures (RVM, bundler) to partition and specify ruby and gem[3] versions required for a given project; have evolved Rack and Metal, pluggable middleware frameworks for applications; and have popularized rapid prototyping with Rails, a standard web application framework that influenced similar frameworks for PHP, Perl, and other languages.
For anyone serious about computer science, Ruby and Ruby-on-Rails have made numerous important contributions to the art and practice of robust software engineering.
Computer Science theory needs to be coupled with industry-leading software engineering methodologies to produce great systems. It appeared that Google engineers (at least the ones I encountered on Tuesday) weren't aware of this prior art for practical engineering. GCP -- both the App Engine (PAAS) and the Compute Engine (IAAS) -- would benefit from this influx of nutrients from the Ruby & Rails worlds.

GCP App Engine vs Heroku capability comparison

In particular, PAAS-provider Heroku has pioneered easy-to-use but extremely flexible cloud platform usage on their Cedar Stack. Let’s look at a few of these capabilities (warning, non-comprehensive list approaching):
  • a CLI toolkit (‘gcloud’ vs ‘heroku’ toolkits); kudos to GCP team, but the GCP primitives could be richer.
  • CLI should be cron-able (heroku has become weak on this; GCP is unclear about this).
  • git-to-deploy automation (which heroku has had since 2008); GCP also has and richly supports ability to change production on the fly. Big kudos to the GCP team for this.
  • dashboard (kudos, GCP's is way better than heroku's)
  • add-ins (GCP is way behind heroku’s rich set of add-in partners) for email, SMS, databases, log monitoring, exception handling, SSL, DNS, etc. Add-ins are part of a DRY mentality for rapidly developing rugged applications.
  • data backups? recovery? importing/exporting? mirroring between production and staging? all this is unclear in GCP. It might be there and I haven’t uncovered it yet. In heroku, it’s very clear and an integral part of the offering.  Here, I’m not so worried about Google losing my data as much as the need for recovery from a user or a developer error. As a developer, I like the ability to grab the relevant 12-hr backup to debug an error locally on my dev machine. These stretch back over two weeks.
  • Postgres. Google is dedicated to open source and postgres is one of the most robust and high-performance SQL-compliant open source DBMS around. Having used both MySQL (GCP's Cloud SQL basis) and Postgres in production systems, I've encountered far fewer "issues" and quirks with Postgres. Heroku's cloud implementation of Postgres is so advanced they have made it a DAAS (database as a service) offering. I realize that there are many major applications running on MySQL. Having both as part of Cloud SQL would be good. I understand that I can provision a VM with postgres, but then the built-in backup situation is unclear and probably rests on my shoulders. Not very DRY.
  • Background queue processing: heroku automatically supports several background queue worker methodologies. I can seamlessly specify background workers for my background queue and know that the queue will be handled correctly. It just works, out-of-the-box. DRY.
  • A full scope of capabilities for PAAS. Here, heroku excels; see more>>. In time, I hope to see GCP moving in this direction of usability and completeness. IAAS flexibility is nice, but DRY PAAS agility means faster time-to-market. For startups, this could be a crucial choice. Innovative startups choosing GCP and then bursting into success will drive adoption of GCP more than trying to persuade old school enterprises to forego their own data centers. Google’s DNA is that of start-ups.

Other areas of improvement

  • bundler-like capability to automatically specify and lock in component versions for a project; GCP’s roll-your-own (reinvent a wheel) approach doesn’t make sense when the open source prior-art is so advanced.
  • database backup & utilities, both inter-cloud, intra-cloud, and cloud-to-local
  • rack-like standard middleware framework
  • staged rollouts for new versions (thus obviating a need for maintenance mode)
  • A/B testing
  • better clarity about suggested production, staging, test, and development environments and seamlessly switching between any.

GCP Live Event



This is my critique of the event itself, so we’re switching gears from engineering to marketing!

Overall

I was thrilled to be able to attend and want to thank Google for the great hospitality, the food, and the event itself. Seeing it live is so much better than on-line streaming. All of the Google people and the event staff were kind, gracious, and enthusiastic. I was particularly pleased whenever a Google engineer showed a genuine interest in the Majozi Rostering Engine and its particular way of dealing with combinatorial complexity.
The high level of all the presentations was amazing. I live in Oakland and keep programmer’s hours (typically finishing at 2am and waking at 10am), so getting up at 6am to attend the event was the middle of my night! My nature is that when I get bored, I immediately fall asleep. Despite being sleep-deprived, this didn’t happen. I had to stay super alert to catch the rapid-fire information, terminology, and meaning during the presentations. Well done GCP team!
Special thanks to Googlers Eric Johnson, Andrew Jessup, Bill (kernel guy introduced to me by Benjamin, the marketing guy), Brian Dorsey, X -- another engineer from Seattle (sorry forgot your name but you were very helpful). If you read this, please send me your names or Circle me on Google+.
Jeff Dean and Urs Holzle at Fireside Chat

Presentations

The entire keynote was great. Urs was great, the demos were awesome, the various portions were good. The presentations I attended were: Compute at Google, GCP and open source, New runtime support on App Engine, and the Fireside chat. The Fireside chat with Urs and Jeff Dean was awesome. Good questions and moderation by Fred.
Font-size on the presentations was too small. Guy Kawasaki gives some great tips on presentations and the 30-pt font rule should be followed.
I’m a big Google+ fan and user, but why no love for G+ live streaming posts during the event? Twitter only? Also would have liked more prominence for the GCP Google+ Page. I should have been live resharing some of the information instead of trying to type it. I do like the sound-bite posts that the Page has. The GCP Google+ Page should almost be a full media kit: photos, info, background, etc. Works for fans too.

Q&A’s

I loved the questions and answers. It was great not having media asking silly divisive click-bait questions as sometimes happens at Google I/O. The developers in attendance really were into the subject matter and were themselves at a high-level of skill. That made for a better overall conference.

Schedule & logistics

Overall, good, but allowing only five minutes between sessions is not enough, especially given the venue (see below). I was glad for the lunch and break times.
  • For some reason, my Attendee badge wasn’t prepared and ready, although my name was on the list. Having a handwritten badge, with a pen and not a Sharpie, was embarrassing throughout the day. Why not have the ability to print a badge label for those cases when a badge isn’t ready? CloudPrint anyone? Failure is the norm, right?
  • Badges: bigger font sizes please. Since the badge hangs so low, it was very hard to read people’s names and companies.
  • Recharging stations great! thanks
  • WiFi availability & usage: great! thanks!
  • Sound system (microphones in particular) had too many glitches. Needs more practice with live mics before the event.
  • GCP Marketing staff (kudos to Benjamin & Bryant) was great, friendly, and good at connecting people together.

Food

The food was wonderful and I'm glad it was easily vegetarian-oriented. Family-style seating and serving worked nicely and helped break the ice. Thanks. The brownies were some of the best ever. Thanks also for the Tcho chocolates! The after-event party at Tank18 was good also.

Venue

Big minus on the venue: Terra Gallery was not appropriate.
  • Venue was too far off the main mass transit lines.
  • The waiting line to enter at 8:30 was right next to the incredibly powerful stench of the rotting garbage in the dumpsters. People were covering their noses, fanning themselves while waiting in line.
  • One small restroom only for each gender .. really? Maybe there was one upstairs but it wasn’t obvious. The ladies were happy, however, to finally see the men lined up in a long queue instead of them. That’s why five minutes between events wasn’t enough. Ten would have been better. And more restrooms.
  • Only one stairway to get from Stage 1 to Stage 2; yes, a second way opened up, but it required traversing outside, through the parking lot & elements, then back inside.
  • Coat/bag check would have been nice, especially since it was a cold, rainy day.
  • Pillars in the two event rooms obstructed the view too much.
  • Having a lounge area directly behind the Stage 1 audience area probably didn't work out as intended: the noise level was distracting from the presentations.
  • Main screen was too low; people’s heads obscured the lower ⅓ of presentations.
  • Speaker often stood in front of the screen, blocking significant portions if one was seated in the center section.
  • directional signs weren’t obvious enough; they blended too much into the GCP style and didn’t stand out. they were too low and didn’t have enough contrast to proclaim their information.
  • It escaped me that the keynote was going to be upstairs in Stage 1. The stairs to get there were part of the escape EXIT, so it didn’t seem like part of the venue.

Conclusion

I’m enthusiastic about GCP. I think it’s ready for prime time. I highly recommend it to anyone considering doing interesting applications. I plan on experimenting with porting my application to a GCP VM and seeing what it’s like to run it on GCP.
The GCP team should be more aware of solved problems and open source technologies often used in PAAS work, such as those coming from the Ruby community: bundler, RVM, Rails, Rack, etc.
GCP’s App Engine functionality could still learn much from Heroku and really needs to have Ruby & Rails support out-of-the-box.

And finally, if you’re anywhere near one of the 27 cities for the up-coming GCP roadshow, do attend. You’ll be glad you did!

Footnotes
1. “Don’t Repeat Yourself” coding mantra for reusable modularity & code context-aware macros.
2. Also called NSP (Nurse Scheduling Problem) in Computer Science literature.
3. Ruby Gems: community-developed open source modular code plug-ins & macros