Tuesday, September 12, 2017
Differential Serving on Firebase Hosting (and moving to Medium)
Tuesday, August 29, 2017
Polymer Summit 2017
In this post I will try to summarize my highlights and takeaways from this summit. There are also a couple of links that go into more detail on the various topics, and I will be exploring some of those topics more closely in the future, which will most likely result in more articles.
Tuesday, June 13, 2017
Polymer in Production / Part 2 - Building, bundling, lazy-loading
Continuing from my previous blog post about including web components and Polymer in a huge legacy web application, I want to focus on optimizing the performance of your web app using the Polymer CLI and at least the L part of the PRPL pattern in this post. There are several things you can do to improve the initial load time, even if your app doesn't follow the recommended app shell architecture.
Thursday, May 4, 2017
Doing more with your Google Location History
location_history_json_converter (or latitude_json_converter as it was called back then), which, thanks to several contributions from the open source community, has turned out to be a rather useful and powerful tool to prepare your takeout data for further manipulation and visualization. Since I've recently used the tool myself again for a travel report (more about this below) and I never actively promoted or explained the tool, I've decided to put together this blog post to tell you about it and show some samples of what can be done with it.
Monday, March 13, 2017
Polymer in Production
For my own personal projects and several other internal projects I did at work, I have grown to love web-components using Polymer because of the ease of development and the natural way to structure applications into (re-usable) parts.
The existing web application in question had been mainly developed with a once (and still) very popular JavaScript library.
A full rewrite of the application was out of the question due to time and budget restraints, but using Polymer would have a lot of benefits for the future as far as testability, maintainability and extensibility are concerned.
In this blog post I will go over some of the things I did and had to consider to make this work.
Wednesday, August 10, 2016
PolymerCubed
And while there are several vendors offering solutions in this area, I decided to give it a try myself and started creating a suite of Polymer elements.
Wednesday, May 25, 2016
I/O 2016
Monday, April 18, 2016
Polymer and the [hidden] attribute
The hidden attribute is a "fairly new" convenience attribute (fairly new = not implemented in IE<=10) to hide page elements that are not relevant in the current context/state of the website. It is especially useful in a Polymer web app, since you can use attribute binding to show/hide elements based on (computed) properties, without having to write your own display: none; styles. There are several cases where you will have to be careful with this attribute though.
Tuesday, March 8, 2016
Polymer on Blogger
I've had some fun over the past few weeks forcing Polymer to work on Blogger, or rather forcing Blogger to work with Polymer, and here are my results, some of which might be more useful than others.
A quick disclaimer before we get started:
This post definitely falls more into the "because I can" category than in the "because you should" category, and would need some extensive testing and tweaking before being used out in the wild.
Thursday, February 11, 2016
I See... People
https://scarygami.github.io/people-api-demo/
This demo will fetch one "page" of results from the people.connections.list method and display the raw JSON for each contact.
You can then click on "load full data" to fetch the rest of the contact information via people.get for each contact.
Some takeaways from what I've seen so far, in no particular order:
The "resourceNames" are interesting
If you want to fetch the data for a Google(+) account, instead of just using the numeric ID, you will have to use people/ID. This is easy enough to get used to, but it makes you wonder what other resources they might have planned to include in this API.
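To illustrate, here's a tiny helper (hypothetical, not part of any client library) that builds the resourceName format the API expects from a bare numeric ID:

```javascript
// Hypothetical helper: the People API addresses a person by a
// resourceName of the form "people/{id}" rather than the bare numeric ID.
function toResourceName(id) {
  return 'people/' + id;
}

console.log(toResourceName('118051310819094153327')); // "people/118051310819094153327"
```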
The data structure is confusing to look at
To be fair, JSON isn't really meant for human consumption, but to be able to work with it programmatically you first have to understand it. Each person has one or more sources where the data comes from.
In most cases there will be two:
CONTACT from your Gmail contact information and PROFILE from the public Google(+) profile.
{
  ...,
  "metadata": {
    "sources": [
      {
        "type": "CONTACT",
        "id": "gmail_id"
      },
      {
        "type": "PROFILE",
        "id": "gplus_id"
      }
    ],
    "objectType": "PERSON"
  },
  ...
}
For each "type" of data (e.g. names, photos, urls, emails, phone numbers, ...) the response will have an array, and each item in this array comes with metadata to show which source the information comes from. While this makes perfect sense for easily parsing and displaying the information programmatically it results in a rather lengthy JSON response. This block here is only for two email addresses:
{
  ...,
  "emailAddresses": [
    {
      "metadata": {
        "primary": true,
        "source": {
          "type": "CONTACT",
          "id": "gmail_id"
        }
      },
      "value": "email1@gmail.com",
      "type": "other",
      "formattedType": "Other"
    },
    {
      "metadata": {
        "source": {
          "type": "CONTACT",
          "id": "gmail_id"
        }
      },
      "value": "email2@somewhere.com",
      "type": "other",
      "formattedType": "Other"
    }
  ],
  ...
}
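A small helper to pull the primary value out of such a field array might look like this (a sketch based on the sample response above; the field names are the ones shown there):

```javascript
// Each field (emailAddresses, phoneNumbers, ...) is an array of items,
// each carrying metadata about its source. The primary item is marked
// with metadata.primary; fall back to the first entry otherwise.
function getPrimary(items) {
  if (!items || items.length === 0) return null;
  const primary = items.find(item => item.metadata && item.metadata.primary);
  return (primary || items[0]).value;
}

const person = {
  emailAddresses: [
    { metadata: { primary: true, source: { type: 'CONTACT', id: 'gmail_id' } },
      value: 'email1@gmail.com', type: 'other', formattedType: 'Other' },
    { metadata: { source: { type: 'CONTACT', id: 'gmail_id' } },
      value: 'email2@somewhere.com', type: 'other', formattedType: 'Other' }
  ]
};

console.log(getPrimary(person.emailAddresses)); // "email1@gmail.com"
```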
Google+ profile images are broken
A bug that will hopefully be fixed soon, but for now the profile photo URLs that come from PROFILE give a 404. Interestingly, profile photo URLs from CONTACT work, as do cover photo URLs.
No access to "private" profile data even if you are allowed to see it
That was one of the biggest problems with/complaints about the Google+ API's people.get method as well. Even if you are using authenticated calls you only get the public Google+ profile information, which doesn't include the private/limited data you might see when visiting someone's Google+ profile. Unfortunately that hasn't changed with this API...
No Google+ contacts
The people.connections.list method only shows Gmail contacts, and none of your Google+ contacts, even if the plus.login scope is included in authentication. So if you want to work with Google+ contacts you will still need to use the people.list method of the Google+ API. And then you might as well use the people.get method of the Google+ API to get the rest of the information as well. The one benefit you get with people.get in the new API is that any private information that has been added via Google Contacts will be displayed along with the Google+ profile information.
No more GData!
And after all my complaints, one positive thing to say as well. If you've been using the old GData Contacts API you should switch to this new API asap. I think everyone who has been forced to work with GData will be happy to never see it again ;)
So to summarize my thoughts:
Great replacement for the old Contacts API, not really adding much value when working with Google+ contacts.
Curious to see what further features (if any) are planned for this API.
Tuesday, January 26, 2016
Polymer in a corporate network
(a.k.a. The Things You Do For Money)
1. Microsoft Windows
The main OS in big corporations (at least in Austria) is still Microsoft Windows, and usually you can't just install another OS on company hardware. However, as it turns out, all the tools needed are readily available on Windows, so it might be unfair to list it among the problems. Consider this part mainly as a summary of what I use for developing with Polymer.
Node.js (with all web development tools being Node-based these days) is pretty well supported across all platforms. It comes with npm as package manager, which lets you install other tools you will need, like Bower or Gulp.
Git (for fetching Bower dependencies) has a Windows installation that also comes with a bash emulator, that is so much nicer to use than the Windows command prompt.
As for text editors you have a wide variety to choose from; personally I like Sublime Text, but I have also used Notepad++ quite frequently.
2. Group Policies
Having installation files available for Windows is nice, but you might not actually be allowed to install non-standard applications on your PC, thanks to all settings and permissions being managed through Active Directory and Group Policies. If you are nice to your local IT department they might make an exception and give you local administrator rights or install custom applications for you, and luckily I have a very nice local IT department ;)
But in many cases not even the local IT department can help you, since they depend themselves on a global team that manages all permissions, and then you will have to start looking at portable solutions that you can simply put anywhere you want without having to install anything.
Here's one possible approach to put your whole development environment on a USB stick:
- Download the latest version of PortableGit and "install" = extract it to any location you want.
- Create a usr/local/bin folder in the same location.
- Download the latest node.exe (from win-x64 or win-x86), which is all that Node.js needs to run, and put it in the usr/local/bin folder you just created.
- Download the latest npm release and extract it into usr/local/bin/node_modules/npm/
- Copy npm and npm.cmd from usr/local/bin/node_modules/npm/bin/ to usr/local/bin
After starting git-bash you can already use npm to install Bower and any other node modules you may want to use with npm install -g bower. Settings for git and npm are stored relative to $HOME, which defaults to C:/Users/YourUser; to keep everything portable you can use a home/YourUser folder in your PortableGit location instead. To prevent messing around with the default config scripts (which will be overwritten when updating PortableGit) you can create a batch file to temporarily (so we don't mess up other applications that need the %HOME% environment variable) set HOME to that folder before starting git-bash:
setlocal
set HOME=%~dp0home\%USERNAME%
git-bash.exe
endlocal
You can have different settings for different usernames that way, or if you prefer hard-code the username in the batch file so it will always refer to the same home folder.
Some bower commands (especially bower init) and probably others sometimes have problems with Mintty, which the current version of git-bash uses as terminal emulator, so sometimes you might have to use bash.exe directly. You can use another batch file for that:
setlocal
set HOME=%~dp0home\%USERNAME%
bin\bash.exe -login -i
endlocal
3. Corporate Firewall
With all of this set up, you might already have stumbled across another problem in the previous step when trying to run npm install -g bower, since the corporate firewall most likely blocked that request. npm respects the http_proxy and https_proxy environment variables, so once you know which proxy to use (easiest by looking at the Internet options of IE) you can set those with:
export http_proxy=http://company-proxy:port
export https_proxy=http://company-proxy:port
To avoid having to do this every time, create a .bashrc file in your /home/YourUser folder and put the commands in that file; it will be executed every time you start bash:
cd ~
touch .bashrc
notepad .bashrc
And with this I'm all set to bring Polymer goodness to the company, let's see where this journey takes me. I expect some more blog posts along the way :)
Tuesday, August 4, 2015
Custom elements for Chrome Apps APIs
The "problem" with elements that depend on Chrome Apps APIs is that you can't test/use them outside of Chrome Apps, so I went ahead and created some gulp tasks to make things easier for me.
The main idea of these tasks is to put the contents of demo or test, which you would normally run directly, and all dependencies into one Chrome app that then uses demo/index.html or test/index.html as main page.
First I take all the files relevant for the element itself plus the test/demo files, run the html files through crisper and put the result into the output components/my-element/ folder (following the layout of gh-pages for Polymer elements). All bower dependencies are run through crisper as well and put into the components folder.
For the Chrome App itself only two files are important: manifest.json defines the necessary permissions (e.g. to use chrome-storage the "storage" permission is required), and main.js launches the test or demo page. The gulp task copies those two files to the main output folder, and changes the window.create call to point to the right file, e.g. at components/my-element/test/index.html.
The Chrome demo and test apps created that way can then be loaded as unpacked extensions.
This works nicely for the demo app, but unfortunately the test app reveals this in the console when starting it:
Uncaught Error: document.write() is not available in packaged apps.
Investigating the problem reveals that this line in web-component-tester causes the issue; it makes sure that all dependencies are loaded before WCT is actually started.
To work around this issue you have to include the necessary scripts in the test files explicitly...
...and tell it not to load any scripts itself...
...before loading web-component-tester/browser.js on all the test pages.
So that I don't have to copy the same couple of lines into each of the test files separately, I extended the gulp-copy task to automatically insert the necessary lines in all files that include a reference to web-component-tester/browser.js.
And with this change tests can be run in the Chrome App.
Following the idea of my previous article, I also wanted to enable live-reload for this workflow.
As opposed to my article, where the livereload.js is removed for the production build, I did it the other way round here by adding it to test/index.html or demo/index.html when running the gulp-live-task. Watching for changes, rebuilding the app if necessary and triggering the reload works basically the same though.
And with this I can leave the test and/or demo apps running and see right away if all tests still pass after making changes and if the demos work as expected.
And now back to actually working on my app. All those distractions you run into while traversing (mostly) uncharted waters ☺
Thursday, July 30, 2015
Live-reload for Polymer Chrome Apps
- Change some code
- Run the code through crisper because the Content Security Policy for Chrome Apps doesn't allow inline scripts
- Reload the Chrome App from chrome://extensions/
- Repeat
After a bit of googling I found this nice article by Konstantin Raev that deals with the problem of live-reload for (non-Polymer) Chrome Apps and offers a straight-forward, working solution,
using tiny-lr and his own adaptation of livereload.js (to work around some Chrome Apps security restrictions).
Using this gulp task will update or reload the Chrome App when any of the source files change, and you can just load your source folder as unpacked extensions, launch your app and start developing/testing:
While this works great for "normal" Chrome Apps, the issue with Polymer Chrome Apps is that they need at least one extra crisper step to get all the JavaScript out of the .html files as separate .js files.
My first lazy approach was to listen for any changes in the source folders, then run a full build and use the dist folder as unpacked extension.
A full build as per the Polymer Starter Kit involves quite a few steps, like minimizing the css/js/html, optimizing the images and vulcanizing the elements:
The only extra step you have to add in addition to what the Polymer Starter Kit does, is crisper after the vulcanize:
In the dev task (to be started with gulp dev) I first run a build, and then repeat the build step whenever something changes in the app folder. The build creates files in the dist folder (which is loaded as unpacked app), and livereload is triggered by listening to changes in this folder.
Of course this approach has several issues. Not only can the build sometimes take quite a while for even small changes, but you also get a minimized, vulcanized app, which can be terrible for debugging.
So instead I added a simplified dev-build that basically only copies all the files to a `dev` folder (to be loaded as unpacked app)...
and runs crisper on all .html files to get the .js parts out of the elements and their dependencies.
While working on that part I encountered an issue where gulp-crisper would ignore all folder structure and e.g. put all files directly into dev/bower_components/ instead of dev/bower_components/polymer/. This issue is now fixed, so make sure to use the newest version 0.0.5 of gulp-crisper.
When watching for changes I also don't update everything every time something changes, but listen for specific changes and only update the necessary parts.
And to prevent live reload from triggering for each single file change (and the build process creates several file changes for each source change) I'm using gulp-batch, collecting all changes in a batch before sending the info to tiny-lr.
Here's a quick video of what this process looks like now.
So with all of this done I can now proceed to work on my Polymer Chrome App after having learnt far more about gulp than I originally intended ☺
Thursday, June 18, 2015
Data Binding vs. Event Handling
Let's take a look at a simple sample of a login. Using the google-signin element you could wait for the google-signin-success event to trigger and then retrieve/display information about the authenticated user and toggle the UI accordingly. Of course then you also have to handle the reverse case if a user signs out, by listening for the google-signed-out event.
But the google-signin element also offers an is-authorized attribute / isAuthorized property that you can bind to and observe. Toggling the UI based on this property is as simple as adding hidden$="[[!isAuthorized]]" and hidden$="[[isAuthorized]]" to the elements you want to show/hide. No extra JS necessary for this, as opposed to before, where you had to set isAuthorized in the event handlers.
To retrieve user information once authorization has been granted you could add an observer to isAuthorized, but I think the much nicer solution is to make user a computed property that depends on isAuthorized. Whenever the value of isAuthorized changes, this will re-evaluate the function and set the user property accordingly.
Let's take this sample a bit further. In many cases you will have to retrieve some more information from your server or elsewhere about the authenticated user. So you would need to trigger some request once the user is signed in and handle the response once it is available. In this sample I'm using my discovery-api-elements to fetch information about the user from Google+, but you can do something similar using iron-ajax or any other data-fetching element.
Instead of triggering the request manually, what you can do is bind the auto property (at least for discovery-api-elements or iron-ajax) to the isAuthorized property. Once isAuthorized and with it auto becomes true, the request will be triggered automatically and you just have to handle the response. But this won't remove the data in case the user signs out. To achieve this we make the data that is displayed (activities) a computed property that depends on both the response from the data-fetching element and the isAuthorized property.
Here's what happens now, when a user signs in:
- google-signin sets isAuthorized to true.
- This sets the auto property on the data element, which triggers the request.
- Once the request completes, plus-activities-list sets the response property accordingly.
- This change triggers recomputing activities with the _parseActivities function.
- Once there are items in the activities array, they will be displayed by the dom-repeat.
And when the user signs out:
- google-signin sets isAuthorized to false.
- This triggers recomputing activities, which will be set to an empty list.
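Stripped of the Polymer specifics, this data flow is just a dependency chain. The sketch below wires it up by hand in plain JavaScript to show the order of events (simplified: no real network request, and the property names are borrowed from the example above):

```javascript
// Minimal sketch of the binding chain: isAuthorized -> auto -> response -> activities.
// In Polymer these updates happen via bindings and computed properties;
// here they are triggered explicitly.
function createApp(fetchActivities) {
  const app = {
    isAuthorized: false,
    response: null,
    activities: [],
    setAuthorized(value) {
      app.isAuthorized = value;
      if (value) {
        // "auto" request: fired as soon as authorization is granted
        app.response = fetchActivities();
      }
      app.recompute();
    },
    recompute() {
      // _parseActivities equivalent: depends on both response and isAuthorized
      app.activities = (app.isAuthorized && app.response) ? app.response.items : [];
    }
  };
  return app;
}

const app = createApp(() => ({ items: ['post 1', 'post 2'] }));
app.setAuthorized(true);
console.log(app.activities.length); // 2
app.setAuthorized(false);
console.log(app.activities.length); // 0
```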
Friday, June 12, 2015
Polymer Quicktip - Attributes vs. Properties
This issue imho comes mainly from the fact that the element docs generated via iron-component-page only list the JS property names, but in many/most cases you will use the HTML attribute names in your markup, which aren't listed anywhere.
Example from the google-signin element:
If you try to include this element in your page like this
<google-signin clientId="MY_CLIENT_ID"></google-signin>
it won't work, because the clientId attribute will be mapped to a clientid property that doesn't exist, and clientId will stay undefined. The correct way to use the element would be:
<google-signin client-id="MY_CLIENT_ID"></google-signin>
So if you encounter issues with properties not getting the value you intended, make sure your attribute names are correct.
Essentially the attribute name is converted to lower case first, and then dash-case is converted to camelCase, so SoMeThInG becomes something and SoMeThInG-ElSe becomes somethingElse.
For those interested, here's the part of the Polymer library that takes care of the translation between attribute names and property names:
https://github.com/Polymer/polymer/blob/master/src/lib/case-map.html
And if you are really curious you can have a look at Polymer.CaseMap._caseMap to see what mappings are being used on your site.
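The mapping is easy to replicate. Here's a rough sketch of what the conversion does (not Polymer's actual code, just the observable behaviour described above):

```javascript
// Sketch of the attribute-name -> property-name translation:
// lower-case everything first, then turn each "-x" into "X".
function attributeToProperty(name) {
  return name.toLowerCase().replace(/-([a-z])/g, (match, letter) => letter.toUpperCase());
}

console.log(attributeToProperty('client-id'));      // "clientId"
console.log(attributeToProperty('SoMeThInG'));      // "something"
console.log(attributeToProperty('SoMeThInG-ElSe')); // "somethingElse"
```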
Thursday, June 11, 2015
Polymer Quicktip - debounce
This is useful if you have a compute- or time-heavy function that depends on several (published) properties and needs to be executed when those properties get a new value, e.g. if you need to create a new ajax call depending on several parameters.
Here's a simple sample to demonstrate this behaviour.
First the element without debounce:
Including this element as <without-debounce property1="foo" property2="bar"></without-debounce> will trigger the function twice when the element is first loaded, and even if you change both properties at the same time you still get two function calls.
Here's the same element with the debounce functionality added:
Using this element, the console.log will only be called once when the element loads, and also only once when properties get changed during a definable time window (300ms in this case). This causes a small, but mostly ignorable delay before the actual execution of the function.
An element that uses this functionality is the iron-ajax element, to prevent executing the actual request until all properties have "finalised". I'm using the same behavior for the same reason in my discovery-api-elements.
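Outside of Polymer, the general debounce idea can be sketched like this (a simplified stand-in for Polymer's this.debounce(), not its actual implementation):

```javascript
// Collapse a burst of calls into a single trailing call after `wait` ms
// of silence; each new call resets the timer.
function debounce(fn, wait) {
  let timer = null;
  return function (...args) {
    clearTimeout(timer);
    timer = setTimeout(() => fn.apply(this, args), wait);
  };
}

let calls = 0;
const onChange = debounce(() => { calls++; }, 300);

// A burst of property changes...
onChange();
onChange();
onChange();
// ...has not run the handler yet; it fires once, ~300ms after the last call.
console.log(calls); // 0
```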
Monday, June 1, 2015
The Photos Dilemma
In the beginning there was Picasa
Picasa Web Albums, which is still available today, comes with a fully-fledged API with read & write access to fully manipulate and organize photos. Admittedly the old GData APIs aren't the nicest to work with compared to modern APIs, especially for client-side applications in JS, but the API still does its job today.
Probably the most useful API calls for read-access, since the documentation can be a bit confusing:
Request a list of albums:
https://picasaweb.google.com/data/feed/api/user/{{userid}}
Info about one album:
https://picasaweb.google.com/data/entry/api/user/{{userid}}/albumid/{{albumid}}
List photos in an album:
https://picasaweb.google.com/data/feed/api/user/{{userid}}/albumid/{{albumid}}
Along came Buzz
Here's a blog post for those who still remember the good old times:
http://googlephotos.blogspot.co.at/2010/02/photos-in-google-buzz.html
Google Buzz didn't really change much about how Picasa Web Albums and the associated API worked; it mostly seemed like Buzz was using the API itself to achieve all its features.
One feature that was introduced was the concept of "Photos from Posts", which automatically created special albums in Picasa for each post with photos you shared to Buzz. Those albums could be recognized in the Picasa Web Albums API by the <gphoto:albumType>Buzz</gphoto:albumType> tag they had assigned in the album description. Funny enough, photos shared directly in posts on Google+ today still generate "Buzz" albums.
On the plus side...
With Google+ we got a new UI for managing photos that in many ways still is more cumbersome to use than the old Picasa Web Albums UI. But the album and photo IDs matched, so it was easy to use the Picasa Web Albums API for programmatic managing of your Google+ photos:
https://plus.google.com/.../6155360510478436241/6155360509668083954
https://picasaweb.google.com/data/.../6155360510478436241/.../6155360509668083954
https://picasaweb.google.com/...#6155360509668083954
With Google+ the new concept of sharing albums to circles was introduced. Those albums would show up with <gphoto:access>private</gphoto:access>, and you could (and still can) retrieve the information about what people and circles albums were shared with by requesting the acl of an album:
https://picasaweb.google.com/data/feed/api/user/{{userid}}/albumid/{{albumid}}/acl
This would show information like this:
<entry>
  <gAcl:scope type='group' value='...'/>
  <gphoto:nickname>Photo Share Test</gphoto:nickname>
</entry>
<entry>
  <gAcl:scope type='user' value='...'/>
  <gphoto:user>116...</gphoto:user>
  <gphoto:nickname>Scarygami Test</gphoto:nickname>
</entry>
Google+ also introduced Instant Upload (or Auto Backup as it is called now), creating a new automatic album with the <gphoto:albumType>InstantUpload</gphoto:albumType> tag. As with "Buzz", the "InstantUpload" name stayed in the API even after the name was changed in the front end.
At the Drive-In
Things started to get a little bit weird with the Google Drive integration. It began with the feature to show (some but not all) photos stored on Google Drive in the Google+ Photos UI, with each Drive folder that contained photos getting its own album.
Those albums wouldn't show up when requesting the list of albums from the Picasa API, but you could request some information and the photos inside if you copied the album ID from the corresponding Google+ URL (https://plus.google.com/photos/.../albums/{{albumId}})
Things got even more confusing when Google(+) Photos were added to Google Drive. This allowed you to add a folder to your Drive which would include all the photos you uploaded and shared on Google+, sorted by year and month. You could then go ahead and re-arrange/edit the photos as you wanted, but... the sync is one-way and one-time only, meaning that changes made on Google Drive won't be reflected back to Google Photos, and you only get the originally uploaded photo in Google Drive, without any changes that you might make in Google Photos at a later point.
You can access those photos using the Drive API via the files.get and files.list methods, and you also have write access using the insert/update/patch methods; the Drive API, being one of the newer discovery-based APIs, is much nicer to work with than the antiquated Picasa API. But it won't help you in managing your Google+ Photos, since the data isn't synced, and there is no indication whatsoever in the file meta-information that the files originally came from Google+. The photos in Google Drive also have completely different IDs than the ones you could use in the Picasa API; they are completely decoupled.
New and shiny?
And so we reach the present, with the new Google Photos UI replacing the Google+ Photos UI. Since there are several essential features missing, like the possibility to add geotags, I've been thinking about creating some extensions/scripts to do some of those things via the Picasa API. The problem is that Google Photos invented completely new IDs for photos and albums that don't match the corresponding IDs in the Picasa API, even though the photos and albums still show up there.
The Picasa IDs show up nowhere in the Google Photos page source where they could be parsed, and the Google Photos IDs don't show up anywhere in the Picasa API, which makes finding a matching photo to work with in the Picasa API nearly impossible. You could parse some meta information (like date/filename) from the Google Photos page and try to find a match in the Picasa API, but that is (a) bound to break regularly as the Google Photos page gets updated and (b) potentially requires a lot of API requests until you get where you want. But that seems to be the only possibility at the moment to get some programmatic access to your photos. Or you could completely forget about Google Photos and continue using Picasa Web Albums and the API to manage your photos, only using Google Photos for uploading/backing up/editing/sharing photos.
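If you did want to go down the fragile metadata-matching route, its core would be something like this sketch (the field names here are hypothetical; real entries from the two sources would need to be normalized to a common shape first):

```javascript
// Try to match a Google Photos entry to a Picasa API entry by
// filename + timestamp, since the IDs of the two systems don't line up.
function findPicasaMatch(photo, picasaEntries) {
  return picasaEntries.find(entry =>
    entry.filename === photo.filename &&
    entry.timestamp === photo.timestamp
  ) || null;
}

const picasaEntries = [
  { id: '6155360509668083954', filename: 'IMG_0042.jpg', timestamp: 1433116800000 },
  { id: '6155360509668083999', filename: 'IMG_0043.jpg', timestamp: 1433120400000 }
];

const match = findPicasaMatch(
  { filename: 'IMG_0042.jpg', timestamp: 1433116800000 },
  picasaEntries
);
console.log(match.id); // "6155360509668083954"
```

As noted above, any such heuristic breaks as soon as filenames or timestamps diverge between the two systems, which is exactly why a real ID mapping would be so valuable.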
Talking about sharing: with Google Photos the main way of sharing albums is to create a "secret link" that can be shared and viewed by anyone who has the link. That also means that all albums created with Google Photos will now always show up with <gphoto:access>private</gphoto:access>. Sharing to Google+ still allows you to share to circles/people without creating the shareable link, and those access permissions are still visible in the Picasa API.
The Picasa API gets a little bit confused though when sharing publicly to Google+. Those albums show up as private in the API, and are shown as "Limited, anyone with the link" in the Picasa Web Albums UI. To make things a little bit more confusing those publicly shared private albums show up in the API even when not authenticated as the owner of the album:
Example of a public private album in the API
A New Hope
It's been almost 4 years now since a blog post about a potential Google+ Photos API was leaked. While being read-only (as most of the Google+ API is), this seemed like a promising start to replace the antiquated Picasa Web Albums Data API. But nothing ever came of it, and with Google Photos now getting decoupled from Google+, the Plus API doesn't seem to be the right place to add such an API.
As discussed above the Google Drive API probably won't be a good home for new photos features either since there is no sync happening after the initial upload, even though it would be possible to represent most metadata related to albums/sharing/editing using custom file properties.
So it seems that we still have to wait for a separate photos API and try to use the Picasa Web Albums UI now as long as it is still working. The minimal functionality I would wish for is a way to map Google Photos IDs to Picasa IDs...
For lack of a better place you might want to star this feature request and maybe add a comment about what you would want to do with a Photos API and what features you are expecting to see in such an API.
Alternatively/additionally you can also use the feedback option in the new Google Photos site/app to tell Google you care about such an API.
Monday, May 18, 2015
Preparing for Polymer 1.0 - hangout-app
Now Polymer has reached beta state with the 0.9 release, and 1.0 is expected to come out at I/O, so the time of breaking changes is slowly coming to an end. Some of my projects will probably forever remain like they are now, but I thought it was about time to start updating some of my more important (imho) elements, starting with my <hangout-app> element, which makes developing Hangout Apps easier.
While migration is generally easy thanks to the migration guide, there are still some things I've stumbled over (mostly because I flipped through the migration guide too quickly...).
No inheritance from custom elements (for now)
Previously, other elements could inherit from the hangout-app element to create their own hangout apps, so they could depend on the loaded property of the parent element to know when the Hangouts API is ready to be used. Since inheriting from custom elements isn't possible for now, you instead place one hangout-app element anywhere in your project and either wait for its ready event to fire or bind to its loaded property. Alternatively you can also include any of your markup as content of the hangout-app element, and this content won't be rendered until the Hangouts API is ready to be used.
Conditional templates
The old <template if="{{condition}}">...</template> implementation completely removed/added DOM elements when the condition changed, which could have a negative effect on performance if used excessively. I have to admit that I used it way too much in my projects, simply because it was easy to use and made the code somewhat clearer. As I wrote a while ago, the much better solution in most cases is to simply hide/show elements by conditionally binding to the hidden attribute: <div hidden$="[[!condition]]">...</div>
In the case of the hangout-app element I wanted to make sure that none of the content that might depend on the Hangouts API is part of the DOM until the API is ready, e.g. when using the hangout-shared-state element, which tries to call the Hangouts API as soon as it is attached. For that reason I used the new implementation of conditional templates in the form of dom-if:
<template is="dom-if" if="[[loaded]]"> <content></content> </template>
By default this new implementation adds the content the first time the condition becomes true, and afterwards it only shows/hides the elements as necessary.
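To make the difference concrete, here is a minimal sketch (demo-element and its loaded property are made up for illustration) contrasting the two approaches inside a Polymer 1.0 element:

```html
<dom-module id="demo-element">
  <template>
    <!-- Cheap toggle: the div always stays in the DOM,
         only its visibility changes with "loaded". -->
    <div hidden$="[[!loaded]]">The API is ready!</div>

    <!-- dom-if: the content is only stamped into the DOM
         the first time "loaded" becomes true. -->
    <template is="dom-if" if="[[loaded]]">
      <content></content>
    </template>
  </template>
  <script>
    Polymer({
      is: 'demo-element',
      properties: {
        loaded: { type: Boolean, value: false }
      }
    });
  </script>
</dom-module>
```

As a rule of thumb: use the hidden binding for things that toggle often, and dom-if for heavy content that must not touch the DOM (or call APIs) before a condition holds.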
Layout attributes > Layout classes
I completely missed this part of the migration guide and was very surprised when my layout didn't look the way I expected. The change from attributes to classes is easy enough though; just make sure to include PolymerElements/iron-flex-layout in your dependencies.
That's it for now, more coming as I upgrade more of my elements ☺
Tuesday, April 21, 2015
Google Sign-In 2.0 - Server-side
User verification
Probably the simplest case is when you only want to verify on the server side who the currently signed-in user is, e.g. to load user-specific data/settings for them. For this you can use the most basic sign-in implementation, securely send the ID token to the server, and use one of the Google API Client Libraries to verify the token and get user information from it.
On the client side you wait for the sign-in success event to trigger, get the id_token from the authenticated user and send it to your server. You should always send the id_token via HTTPS for security reasons. On the server side (in this case using Python with Flask) you use the Google API Client Library to verify the id_token and then use the information you get in whatever way you need. Please note that in this case you won't be able to make calls to Google APIs on behalf of the user. To see what information about the user you can get from the id_token, I would highly recommend reading this article about ID tokens.
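As a rough sketch of that server-side step, here is what verification could look like using the current google-auth package (the CLIENT_ID value is a made-up placeholder; verify_id_token needs network access to fetch Google's certificates, so the claim handling is split into its own helper):

```python
# Hypothetical client ID -- replace with the one from your developer console.
CLIENT_ID = "1234567890-example.apps.googleusercontent.com"

def user_from_claims(idinfo):
    """Pick the fields most apps need out of a verified ID token payload."""
    # Standard ID token claims: iss (issuer), sub (stable user ID), email, ...
    if idinfo.get("iss") not in ("accounts.google.com",
                                 "https://accounts.google.com"):
        raise ValueError("Wrong issuer.")
    return {
        "user_id": idinfo["sub"],          # use sub, never email, as the key
        "email": idinfo.get("email"),
        "name": idinfo.get("name"),
        "picture": idinfo.get("picture"),
    }

def verify_id_token(token):
    """Verify the raw id_token sent by the client (makes a network call)."""
    from google.oauth2 import id_token           # assumes google-auth installed
    from google.auth.transport import requests
    idinfo = id_token.verify_oauth2_token(token, requests.Request(), CLIENT_ID)
    return user_from_claims(idinfo)
```

In a Flask route you would read the token from the request body, call verify_id_token, and look up your own user record by the returned user_id.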
Optional server-side offline access
If you offer a web service that will do something on behalf of the user while they are not online, I would recommend making this an opt-in service after the user has signed in. E.g. if your service offers sending news to a user via the Google Glass Mirror API, they could sign in to your website first, pick the news categories they are interested in and then "flip a switch" to enable "offline access".
For this you would have the normal basic sign-in flow on the client-side. You can then use the ID token as before to check if the user already has offline access authorized (i.e. you have credentials stored for their user ID already). If there is no offline access yet you can display an extra button to go through the
grantOfflineAccess flow to get a one-time code which can be exchanged for access and refresh tokens on the server side.
On the server-side you can then use the client-library to exchange the code for credentials that can be stored to act on behalf of the user at any point.
grantOfflineAccess will always cause a pop-up requesting offline access to be shown to the user. This is the only way to get a refresh token, including when you have lost a previously issued one.
Necessary server-side offline access
If your service won't work without offline access (I would be curious to hear your use cases here) and you don't want your users to go through two sign-in steps, things get a little bit more difficult on the client side (while you can still use the same server.py as above).
You can't use the default sign-in button for this, since this flow always runs without granting offline access. Instead you have to use your own custom button (make sure to create it following the branding guidelines) which calls grantOfflineAccess.
For "old" users that come back to your website, calling gapi.auth2.init will initiate an immediate sign-in flow, which you can catch with the isSignedIn listener to check for existing credentials as before (just in case you lost them).
For "new" users the grantOfflineAccess flow will return a code which you can exchange as above, and at the same time it authenticates the user on the client side as well (triggering your isSignedIn listener).
I hope this answers some of the questions you have, feel free to comment if you have more :)





