Planet Chromium

February 23, 2018

Google Chrome Releases

Dev Channel Update for Desktop

The dev channel has been updated to 66.0.3350.0 for Mac, Linux, and Windows.


A partial list of changes is available in the log. Interested in switching release channels? Find out how. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.

Abdul Syed
Google Chrome

by Abdul Syed (noreply@blogger.com) at February 23, 2018 04:06 PM

Stable Channel Update for Chrome OS

The Stable channel has been updated to 64.0.3282.167 / 64.0.3282.169 (Platform version: 10176.72.0 / 10176.73.0) for most Chrome OS devices. This build contains a number of bug fixes and security updates. Systems will be receiving updates over the next several days.


If you find new issues, please let us know by visiting our forum or filing a bug. Interested in switching channels? Find out how. You can submit feedback using ‘Report an issue...’ in the Chrome menu (3 vertical dots in the upper right corner of the browser).


Kevin Bleicher
Google Chrome

by Kevin Bleicher (noreply@blogger.com) at February 23, 2018 10:22 AM

February 22, 2018

Google Chrome Releases

Dev Channel Update for Chrome OS

The Dev channel has been updated to 66.0.3350.3 (Platform version: 10425.0.0) for most Chrome OS devices. This build contains a number of bug fixes, security updates and feature enhancements. 

If you find new issues, please let us know by visiting our forum or filing a bug. Interested in switching channels? Find out how. You can submit feedback using ‘Report an issue...’ in the Chrome menu (3 vertical dots in the upper right corner of the browser). 


Josafat Garcia
Google Chrome

by Josafat (noreply@blogger.com) at February 22, 2018 07:03 PM

Stable Channel Update for Desktop

The stable channel has been updated to 64.0.3282.186 for Mac, Linux, and Windows, which will roll out over the coming days/weeks.


A list of all changes is available in the log. Interested in switching release channels? Find out how. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.



Abdul Syed
Google Chrome

by Abdul Syed (noreply@blogger.com) at February 22, 2018 06:14 PM

February 21, 2018

Google Chrome Releases

Beta Channel Update for Chrome OS

The Beta channel has been updated to 65.0.3325.89 (Platform version: 10323.39.0) for most Chrome OS devices. This build contains a number of bug fixes, security updates and feature enhancements. 

If you find new issues, please let us know by visiting our forum or filing a bug. Interested in switching channels? Find out how. You can submit feedback using ‘Report an issue...’ in the Chrome menu (3 vertical dots in the upper right corner of the browser). 


Bernie Thompson
Google Chrome

by Bernie Thompson (noreply@blogger.com) at February 21, 2018 05:34 PM

Chrome Beta for Android Update

Ladies and gentlemen, behold!  Chrome Beta 65 (65.0.3325.85) for Android has been released and is available in Google Play.  A partial list of the changes in this build is available in the Git log. Details on new features are available on the Chromium blog, and developers should check out our updates related to the web platform here.

If you find a new issue, please let us know by filing a bug. More information about Chrome for Android is available on the Chrome site.

Estelle Yomba
Google Chrome

by Estelle Yomba (noreply@blogger.com) at February 21, 2018 05:25 PM

Beta Channel Update for Desktop

The beta channel has been updated to 65.0.3325.88 for Mac, Linux, and Windows.


A full list of changes in this build is available in the log. Interested in switching release channels?  Find out how here. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.


Krishna Govind
Google Chrome

by Krishna Govind (noreply@blogger.com) at February 21, 2018 11:16 AM

February 20, 2018

Google Chrome Releases

Beta Channel Update for Chrome OS

The Beta channel has been updated to 65.0.3325.65 (Platform version: 10323.30.0) for most Chrome OS devices. This build contains a number of bug fixes, security updates and feature enhancements. 

If you find new issues, please let us know by visiting our forum or filing a bug. Interested in switching channels? Find out how. You can submit feedback using ‘Report an issue...’ in the Chrome menu (3 vertical dots in the upper right corner of the browser). 


Bernie Thompson
Google Chrome

by Bernie Thompson (noreply@blogger.com) at February 20, 2018 05:18 PM

Dev Channel Update for Chrome OS

The Dev channel has been updated to 66.0.3344.0 (Platform version: 10403.0.0) for most Chrome OS devices. This build contains a number of bug fixes, security updates and feature enhancements. 

If you find new issues, please let us know by visiting our forum or filing a bug. Interested in switching channels? Find out how. You can submit feedback using ‘Report an issue...’ in the Chrome menu (3 vertical dots in the upper right corner of the browser). 


Josafat Garcia
Google Chrome

by Josafat (noreply@blogger.com) at February 20, 2018 11:02 AM

February 15, 2018

Google Chrome Releases

Dev Channel Update for Desktop

The dev channel has been updated to 66.0.3346.8 for Mac and Linux, and 66.0.3346.8/.9 for Windows.


A partial list of changes is available in the log. Interested in switching release channels? Find out how. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.

Abdul Syed
Google Chrome

by Abdul Syed (noreply@blogger.com) at February 15, 2018 12:52 PM

February 14, 2018

Google Chrome Releases

Chrome Beta for Android Update

Ladies and gentlemen, behold!  Chrome Beta 65 (65.0.3325.74) for Android has been released and is available in Google Play.  A partial list of the changes in this build is available in the Git log. Details on new features are available on the Chromium blog, and developers should check out our updates related to the web platform here.

If you find a new issue, please let us know by filing a bug. More information about Chrome for Android is available on the Chrome site.

Estelle Yomba
Google Chrome

by Estelle Yomba (noreply@blogger.com) at February 14, 2018 10:57 PM

Beta Channel Update for Desktop

The beta channel has been updated to 65.0.3325.73 for Mac, Linux, and Windows.


A full list of changes in this build is available in the log. Interested in switching release channels?  Find out how here. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.


Krishna Govind
Google Chrome

by Krishna Govind (noreply@blogger.com) at February 14, 2018 02:35 PM

Chromium Blog

Under the hood: How Chrome's ad filtering works

While most advertising on the web is respectful of user experience, over the years we've increasingly heard from our users that some advertising can be particularly intrusive. As we announced last June, Chrome will tackle this issue by removing ads from sites that do not follow the Better Ads Standards. We've previously discussed some of the details surrounding how Chrome protects users from intrusive ads, but as we approach the launch date of February 15, we wanted to go under the hood and discuss how this feature works in more detail.

What are the Better Ads Standards?
The Better Ads Standards are the result of public consumer research by the Coalition for Better Ads, an industry group focused on improving users' experience with online advertising. Over 40,000 internet users in North America and Europe participated in surveys where they were shown common ad experiences and asked to evaluate how intrusive the experiences were. The most intrusive ad experiences include prestitial ads (those full-page ads that block you from seeing the content on the page) and flashing animated ads. More details about the research and methodology can be found on the Coalition's website.

Although a few of the ad experiences that violate the Better Ads Standards are problems with the ad itself, the majority of problematic ad experiences are controlled by the site owner — such as high ad density or prestitial ads with countdown. This result led to the approach Chrome takes to protect users from many of the intrusive ad experiences identified by the Better Ads Standards: evaluate how well sites comply with the Better Ads Standards, inform sites of any issues encountered, provide the opportunity for sites to address identified issues, and remove ads from sites that continue to maintain a problematic ad experience.

Today, the Better Ads Standards consist of 12 ad experiences that research found to be particularly annoying to users. Image Source: Coalition for Better Ads


Evaluating sites for violations
Sites are evaluated by examining a sample of pages from the site. Depending on how many violations of the Better Ads Standards are found, the site will be evaluated as having a status of Passing, Warning, or Failing. The evaluation status of sites can be accessed via the Ad Experience Report API. Site owners can also see more detailed results, such as the specific violations of the Better Ads Standards that were found, via the Ad Experience Report in Google’s Search Console. From the Report, site owners can also request that their site be re-reviewed after they have addressed the non-compliant ad experiences.

The Ad Experience Report in Google's Search Console allows site owners to see their overall site evaluation status, as well as the specifics of any violations identified on their site.
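The same status is also available programmatically. A minimal sketch against the Ad Experience Report API (the endpoint shape and response fields here are assumptions based on the v1 REST surface; API_KEY is a placeholder, so check the API documentation for the exact schema):

// Sketch: look up a site's ad experience status (illustrative fields).
const API_KEY = 'YOUR_API_KEY';  // assumed: an API key with access enabled
const site = encodeURIComponent('https://example.com');
fetch(`https://adexperiencereport.googleapis.com/v1/sites/${site}?key=${API_KEY}`)
  .then(response => response.json())
  .then(report => {
    // Each per-platform report is expected to carry a status such as
    // PASSING, WARNING, or FAILING.
    console.log(report);
  });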


Filtering on sites at the network level
At a technical level, when a Chrome user navigates to a page, Chrome’s ad filter first checks if that page belongs to a site that fails the Better Ads Standards. If so, network requests on the page — such as those for JavaScript or images — are checked against a list of known ad-related URL patterns. If there is a match, Chrome will block the request, preventing the ad from displaying on the page. This set of patterns is based on the public EasyList filter rules, and includes patterns matching many ad providers including Google’s own ad platforms, AdSense and DoubleClick.
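Conceptually, the per-request check looks something like the following sketch (illustrative JavaScript rather than Chrome's actual C++ implementation; the patterns and the siteFailsBetterAdsStandards helper are made up for the example):

// Sketch: block ad-related subresource requests on failing sites.
const adPatterns = [/doubleclick\.net/, /\/adsbygoogle\./, /\/ad_banner\//];  // EasyList-style examples
function shouldBlockRequest(pageSite, requestUrl) {
  if (!siteFailsBetterAdsStandards(pageSite)) return false;  // hypothetical status lookup
  return adPatterns.some(pattern => pattern.test(requestUrl));
}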

What this looks like in Chrome
Chrome will automatically block ads on sites that fail the Better Ads Standards, using the approach described above. When at least one network request has been blocked, Chrome will show the user a message indicating that ad blocking has occurred, as well as an option to disable this setting by selecting “allow ads on this site.” For desktop users, the notification in Chrome's address bar will look similar to Chrome's existing pop-up blocker. Android users will see a message in a small infobar at the bottom of their screen, and can tap on “details” to see more information and override the default setting.

Chrome will automatically block intrusive ads on sites that have been found to violate the Better Ads Standards, but users have the option to disable the feature by selecting “allow ads on this site.” 


Early results show positive progress for users
While the result of this action is that Chrome users will not see ads on sites that consistently violate the Better Ads Standards, our goal is not to filter any ads at all but to improve the experience for all web users. As of February 12, 42% of sites which were failing the Better Ads Standards have resolved their issues and are now passing. This is the outcome we were hoping for — that sites would take steps to fix intrusive ad experiences themselves and benefit all web users. However, if a site continues to maintain non-compliant ad experiences 30 days after being notified of violations, Chrome will begin to block ads on that site.

We're encouraged by early results showing industry shifts away from intrusive ad experiences, and look forward to continued collaboration with the industry toward a future where Chrome's ad filtering technology will not be needed.

Posted by Chris Bentzel, Engineering Manager

by Chrome Blog (noreply@blogger.com) at February 14, 2018 04:02 AM

February 13, 2018

Google Chrome Releases

Stable Channel Update for Desktop

The stable channel has been updated to 64.0.3282.167 for Mac & Linux, and 64.0.3282.167/168 for Windows, which will roll out over the coming days/weeks.

Security Fixes and Rewards
Note: Access to bug details and links may be kept restricted until a majority of users are updated with a fix. We will also retain restrictions if the bug exists in a third party library that other projects similarly depend on, but haven’t yet fixed.


This update includes 1 security fix. Please see the Chrome Security Page for more information.

[$N/A][806388] High CVE-2018-6056: Incorrect derived class instantiation in V8. Reported by lokihardt of Google Project Zero on 2018-01-26



A list of all changes is available in the log. Interested in switching release channels? Find out how. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.


Abdul Syed
Google Chrome

by Abdul Syed (noreply@blogger.com) at February 13, 2018 04:04 PM

Google Chrome


Starting on February 15, Chrome will stop showing all ads on sites that repeatedly display disruptive ads after they’ve been flagged.

by Rahul Roy-Chowdhury at February 13, 2018 02:00 PM

February 12, 2018

Google Chrome Releases

Dev Channel Update for Chrome OS

The Dev channel has been updated to 65.0.3325.65 (Platform version: 10323.30.0) for most Chrome OS devices. This build contains a number of bug fixes, security updates and feature enhancements. 

If you find new issues, please let us know by visiting our forum or filing a bug. Interested in switching channels? Find out how. You can submit feedback using ‘Report an issue...’ in the Chrome menu (3 vertical dots in the upper right corner of the browser). 


Bernie Thompson
Google Chrome

by Bernie Thompson (noreply@blogger.com) at February 12, 2018 04:52 PM

V8 JavaScript Engine

Lazy deserialization

TL;DR: Lazy deserialization was recently enabled by default in V8 version 6.4, reducing V8’s memory consumption by over 500 KB per browser tab on average. Read on to find out more!

Introducing V8 snapshots

But first, let’s take a step back and have a look at how V8 uses heap snapshots to speed up creation of new Isolates (which roughly correspond to a browser tab in Chrome). My colleague Yang Guo gave a good introduction on that front in his article on custom startup snapshots:

The JavaScript specification includes a lot of built-in functionality, from math functions to a full-featured regular expression engine. Every newly-created V8 context has these functions available from the start. For this to work, the global object (for example, the window object in a browser) and all the built-in functionality must be set up and initialized into V8’s heap at the time the context is created. It takes quite some time to do this from scratch.

Fortunately, V8 uses a shortcut to speed things up: just like thawing a frozen pizza for a quick dinner, we deserialize a previously-prepared snapshot directly into the heap to get an initialized context. On a regular desktop computer, this can bring the time to create a context from 40 ms down to less than 2 ms. On an average mobile phone, this could mean a difference between 270 ms and 10 ms.

To recap: snapshots are critical for startup performance, and they are deserialized to create the initial state of V8’s heap for each Isolate. The size of the snapshot thus determines the minimum size of the V8 heap, and larger snapshots translate directly into higher memory consumption for each Isolate.

A snapshot contains everything needed to fully initialize a new Isolate, including language constants (e.g., the undefined value), internal bytecode handlers used by the interpreter, built-in objects (e.g., String), and the functions installed on built-in objects (e.g., String.prototype.replace) together with their executable Code objects.

Startup snapshot size in bytes from 2016-01 to 2017-09. The x-axis shows V8 revision numbers.

Over the past two years, the snapshot has nearly tripled in size, going from roughly 600 KB in early 2016 to over 1500 KB today. The vast majority of this increase comes from serialized Code objects, which have increased both in count (e.g., through recent additions to the JavaScript language as the language specification evolves and grows) and in size (built-ins generated by the new CodeStubAssembler pipeline ship as native code, versus the more compact bytecode or minimized JS formats).

This is bad news, since we’d like to keep memory consumption as low as possible.

Lazy deserialization

One of the major pain points was that we used to copy the entire content of the snapshot into each Isolate. Doing so was especially wasteful for built-in functions, which were all loaded unconditionally but may never have ended up being used.

This is where lazy deserialization comes in. The concept is quite simple: what if we were to only deserialize built-in functions just before they were called?

A quick investigation of some of the most popular websites showed this approach to be quite attractive: on average, only 30% of all built-in functions were used, with some sites only using 16%. This looked remarkably promising, given that most of these sites are heavy JS users and these numbers can thus be seen as a (fuzzy) lower bound of potential memory savings for the web in general.

As we began working on this direction, it turned out that lazy deserialization integrated very well with V8’s architecture and there were only a few, mostly non-invasive design changes necessary to get up and running:

  1. Well-known positions within the snapshot. Prior to lazy deserialization, the order of objects within the serialized snapshot was irrelevant since we’d only ever deserialize the entire heap at once. Lazy deserialization must be able to deserialize any given built-in function on its own, and therefore has to know where it is located within the snapshot.
  2. Deserialization of single objects. V8’s snapshots were initially designed for full heap deserialization, and bolting on support for single-object deserialization required dealing with a few quirks such as non-contiguous snapshot layout (serialized data for one object could be interspersed with data for other objects) and so-called backreferences (which can directly reference objects previously deserialized within the current run).
  3. The lazy deserialization mechanism itself. At runtime, the lazy deserialization handler must be able to a) determine which code object to deserialize, b) perform the actual deserialization, and c) attach the deserialized code object to all relevant functions.

Our solution to the first two points was to add a new dedicated built-ins area to the snapshot, which may only contain serialized code objects. Serialization occurs in a well-defined order and the starting offset of each Code object is kept in a dedicated section within the built-ins snapshot area. Both back-references and interspersed object data are disallowed.

Lazy built-in deserialization is handled by the aptly named DeserializeLazy built-in, which is installed on all lazy built-in functions at deserialization time. When called at runtime, it deserializes the relevant Code object and finally installs it on both the JSFunction (representing the function object) and the SharedFunctionInfo (shared between functions created from the same function literal). Each built-in function is deserialized at most once.
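The effect is loosely analogous to a self-replacing lazy stub. A JavaScript sketch of the idea (V8's real implementation operates on Code objects in generated native code; deserializeBuiltin is a hypothetical helper):

// Sketch: a built-in that deserializes itself on first call.
function makeLazyBuiltin(name) {
  let impl = function stub(...args) {
    impl = deserializeBuiltin(name);  // hypothetical: decode the snapshot entry
    return impl(...args);             // deserialized at most once
  };
  return (...args) => impl(...args);
}
const stringReplace = makeLazyBuiltin('String.prototype.replace');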

In addition to built-in functions, we have also implemented lazy deserialization for bytecode handlers. Bytecode handlers are code objects that contain the logic to execute each bytecode within V8’s Ignition interpreter. Unlike built-ins, they neither have an attached JSFunction nor a SharedFunctionInfo. Instead, their code objects are stored directly in the dispatch table into which the interpreter indexes when dispatching to the next bytecode handler. Lazy deserialization works similarly to built-ins: the DeserializeLazy handler determines which handler to deserialize by inspecting the bytecode array, deserializes the code object, and finally stores the deserialized handler in the dispatch table. Again, each handler is deserialized at most once.
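The dispatch-table variant can be sketched the same way (again JavaScript standing in for generated code; deserializeHandler is a hypothetical helper):

// Sketch: table slots start out pointing at the lazy handler.
function handleLazy(frame, bytecode) {
  const handler = deserializeHandler(bytecode);  // hypothetical helper
  dispatchTable[bytecode] = handler;             // installed once, reused thereafter
  return handler(frame, bytecode);
}
const dispatchTable = new Array(256).fill(handleLazy);  // one slot per bytecode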

Results

We evaluated memory savings by loading the top 1000 most popular websites using Chrome 65 on an Android device, with and without lazy deserialization.

On average, V8’s heap size decreased by 540 KB, with 25% of the tested sites saving more than 620 KB, 50% saving more than 540 KB, and 75% saving more than 420 KB.

Runtime performance (measured on standard JS benchmarks such as Speedometer, as well as a wide selection of popular websites) has remained unaffected by lazy deserialization.

Next steps

Lazy deserialization ensures that each Isolate only loads the built-in code objects that are actually used. That is already a big win, but we believe it is possible to go one step further and reduce the (built-in-related) cost of each Isolate to effectively zero.

We hope to bring you updates on this front later this year. Stay tuned!

Posted by Jakob Gruber (@schuay)

by Mathias Bynens (noreply@blogger.com) at February 12, 2018 02:08 AM

February 09, 2018

Google Chrome Releases

Dev Channel Update for Desktop

The dev channel has been updated to 66.0.3343.3 for Mac and Linux, and 66.0.3343.3/.4 for Windows.


A partial list of changes is available in the log. Interested in switching release channels? Find out how. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.

Abdul Syed
Google Chrome

by Abdul Syed (noreply@blogger.com) at February 09, 2018 11:36 PM

February 08, 2018

Chromium Blog

Chrome 65 Beta: CSS Paint API and the ServerTiming API

Unless otherwise noted, changes described below apply to the newest Chrome Beta channel release for Android, Chrome OS, Linux, Mac, and Windows.

CSS Paint API
The CSS Paint API, also known as “CSS Custom Paint”, allows developers to programmatically generate an image whenever a CSS property expects one. Instead of referencing an image resource, developers can now use the new paint() function to reference a paint worklet that will draw the image. This API can be used for many things, including making the DOM tree smaller and transferring significantly less data compared to an image.

<style>
  textarea {
    background-image: paint(checkerboard);
  }
</style>
<textarea></textarea>
<script>
  CSS.paintWorklet.addModule('checkerboard.js');
</script>

To see the paint worklet in action, check out our explainer and the video demo below.


In this example, the CSS Paint API is used to programmatically create a checkerboard image.
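The checkerboard.js module referenced above might look like this (a sketch modeled on the explainer's example):

// checkerboard.js: a paint worklet module that draws a checkerboard.
registerPaint('checkerboard', class {
  paint(ctx, geom) {
    const size = 32;
    for (let y = 0; y * size < geom.height; y++) {
      for (let x = 0; x * size < geom.width; x++) {
        if ((x + y) % 2 === 0) {
          ctx.fillStyle = 'gray';
          ctx.fillRect(x * size, y * size, size, size);
        }
      }
    }
  }
});
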
Server Timing API
Developers interested in measuring the performance of their web applications have been able to use the Navigation Timing and Resource Timing APIs to request timing data for the document and its resources. Until now, there has been no way for the server to send any details about its response time to the client. The new Server Timing API allows web servers to pass performance timing information via HTTP headers to browsers. This new API provides developers a more complete performance picture that includes the speed of both the client and the server. For example, Chrome Developer Tools now shows server timing performance information via the Server Timing API.

Screenshot of the Chrome Developer Tools integration of the ServerTiming API.
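For example, a server might send a header such as:

Server-Timing: db;dur=53, cache;desc="Cache Read";dur=23.2

and the page can read those metrics back from the performance timeline (sketch):

// Each entry exposes the name, description, and duration from the header.
const [navigation] = performance.getEntriesByType('navigation');
for (const metric of navigation.serverTiming) {
  console.log(metric.name, metric.description, metric.duration);
}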

Other features in this release

Blink > CSS


  • Developers can now use the :any-link pseudo-selector to apply CSS properties to all unvisited or visited hyperlink elements.
  • The syntax for specifying HSL/HSLA and RGB/RGBA coordinates for the color property now matches the CSS Color 4 spec.
  • Developers can use display:contents to generate boxes for an element’s children and pseudo-elements without generating the parent box (see the sketch below).
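A minimal sketch of display:contents (assumed markup, for illustration only):

<style>
  /* The ul generates no box of its own; its li children participate
     in layout as if they were direct children of nav. */
  ul.menu { display: contents; }
</style>
<nav><ul class="menu"><li>Home</li><li>Docs</li></ul></nav>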

Blink > DOM



  • To complement assignedNodes(), the <slot> element now has an assignedElements() method, which returns only the element nodes assigned to a given slot (sketch below).
  • Chrome now supports the HTMLAnchorElement.relList property to indicate the relationship between the resource represented by the <a> element and the current document. Thanks to Samsung for this contribution!
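Brief sketches of both additions (the shadow root and the markup are assumed to exist):

// assignedElements(): element nodes only, unlike assignedNodes().
const slot = shadowRoot.querySelector('slot[name="icon"]');
console.log(slot.assignedElements());

// relList: a DOMTokenList view of the anchor's rel attribute.
const anchor = document.querySelector('a');
console.log(anchor.relList.contains('noopener'));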

Blink > Feature Policy

Blink > Network

  • For compatibility with the latest TLS spec, Chrome now supports the draft-23 version of the TLS 1.3 protocol.
  • Developers can use Request.destination to evaluate which resource their service worker is fetching (sketch below).
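A service-worker sketch of Request.destination:

self.addEventListener('fetch', event => {
  // destination is e.g. 'image', 'script', 'style', or 'document'.
  if (event.request.destination === 'image') {
    event.respondWith(
      caches.match(event.request).then(cached => cached || fetch(event.request))
    );
  }
});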

Blink > Performance APIs

  • As specified by WebIDL, PerformanceResourceTiming, PerformanceLongTaskTiming, and TaskAttributionTiming now support the toJSON method to convert objects to JSON (example below).
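For example:

// Timing entries can now be serialized directly.
const [entry] = performance.getEntriesByType('resource');
console.log(JSON.stringify(entry.toJSON()));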

Blink > Security

  • To protect users against cross-origin information leakage, Chrome will ignore the presence of the download attribute on anchor elements with cross-origin attributes.

Deprecations and interoperability improvements

Blink > Bindings

  • For compatibility with the HTML spec, document.all is no longer overwritable.

Blink > Network

  • As previously announced, Chrome 65 will not trust certificates issued by Symantec’s Legacy PKI after December 1st, 2017; affected certificates will result in interstitials. This will only affect site operators who explicitly opted out of the transition from Symantec’s Legacy PKI to DigiCert’s new PKI, and does not apply to the previously disclosed independent sub-CAs from this infrastructure.
For a complete list of all features (including experimental features) in this release, see the Chrome 65 milestone hotlist.

Posted by Ian Kilpatrick, Patiently Painting Engineer

by Chrome Blog (noreply@blogger.com) at February 08, 2018 12:10 PM

Google Chrome Releases

Beta Channel Update for Desktop

The Chrome team is excited to announce the promotion of Chrome 65 to the beta channel for Windows, Mac and Linux. Chrome 65.0.3325.51 contains our usual under-the-hood performance and stability tweaks, but there are also some cool new features to explore - please head to the Chromium blog to learn more!


A full list of changes in this build is available in the log. Interested in switching release channels?  Find out how here. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.


Krishna Govind
Google Chrome

by Krishna Govind (noreply@blogger.com) at February 08, 2018 12:06 PM

Chromium Blog

A secure web is here to stay

For the past several years, we’ve moved toward a more secure web by strongly advocating that sites adopt HTTPS encryption. And within the last year, we’ve also helped users understand that HTTP sites are not secure by gradually marking a larger subset of HTTP pages as “not secure”. Beginning in July 2018 with the release of Chrome 68, Chrome will mark all HTTP sites as “not secure”.


In Chrome 68, the omnibox will display “Not secure” for all HTTP pages.

Developers have been transitioning their sites to HTTPS and making the web safer for everyone. Progress last year was incredible, and it’s continued since then:

  • Over 68% of Chrome traffic on both Android and Windows is now protected
  • Over 78% of Chrome traffic on both Chrome OS and Mac is now protected
  • 81 of the top 100 sites on the web use HTTPS by default
Chrome is dedicated to making it as easy as possible to set up HTTPS. Mixed content audits are now available to help developers migrate their sites to HTTPS in the latest Node CLI version of Lighthouse, an automated tool for improving web pages. The new audit in Lighthouse helps developers find which resources a site loads using HTTP, and which of those are ready to be upgraded to HTTPS simply by changing the subresource reference to the HTTPS version.

Lighthouse is an automated developer tool for improving web pages.

Chrome’s new interface will help users understand that all HTTP sites are not secure, and continue to move the web towards a secure HTTPS web by default. HTTPS is easier and cheaper than ever before, and it unlocks both performance improvements and powerful new features that are too sensitive for HTTP. Developers, check out our set-up guides to get started.

Posted by Emily Schechter, Chrome Security Product Manager

by Chrome Blog (noreply@blogger.com) at February 08, 2018 10:00 AM

Google Chrome Releases

Dev Channel Update for Chrome OS

The Dev channel has been updated to 65.0.3325.56 (Platform version: 10323.21.0) for most Chrome OS devices. This build contains a number of bug fixes, security updates and feature enhancements. 

If you find new issues, please let us know by visiting our forum or filing a bug. Interested in switching channels? Find out how. You can submit feedback using ‘Report an issue...’ in the Chrome menu (3 vertical dots in the upper right corner of the browser). 


Bernie Thompson
Google Chrome

by Bernie Thompson (noreply@blogger.com) at February 08, 2018 08:16 AM

February 07, 2018

Igalia Chromium

Andy Wingo: design notes on inline caches in guile

Ahoy, programming-language tinkerfolk! Today's rambling missive chews the gnarly bones of "inline caches", in general but also with particular respect to the Guile implementation of Scheme. First, a little intro.

inline what?

Inline caches are a language implementation technique used to accelerate polymorphic dispatch. Let's dive in to that.

By implementation technique, I mean that the technique applies to the language compiler and runtime, rather than to the semantics of the language itself. The effects on the language do exist though in an indirect way, in the sense that inline caches can make some operations faster and therefore more common. Eventually inline caches can affect what users expect out of a language and what kinds of programs they write.

But I'm getting ahead of myself. Polymorphic dispatch literally means "choosing based on multiple forms". Let's say your language has immutable strings -- like Java, Python, or Javascript. Let's say your language also has operator overloading, and that it uses + to concatenate strings. Well at that point you have a problem -- while you can specify a terse semantics of some core set of operations on strings (win!), you can't choose one representation of strings that will work well for all cases (lose!). If the user has a workload where they regularly build up strings by concatenating them, you will want to store strings as trees of substrings. On the other hand if they want to access codepoints by index, then you want an array. But if the codepoints are all below 256, maybe you should represent them as bytes to save space, and as 4-byte codepoints otherwise? Or maybe even UTF-8 with a codepoint index side table.

The right representation (form) of a string depends on the myriad ways that the string might be used. The string-append operation is polymorphic, in the sense that the precise code for the operator depends on the representation of the operands -- despite the fact that the meaning of string-append is monomorphic!

Anyway, that's the problem. Before inline caches came along, there were two solutions: callouts and open-coding. Both were bad in similar ways. A callout is where the compiler generates a call to a generic runtime routine. The runtime routine will be able to handle all the myriad forms and combination of forms of the operands. This works fine but can be a bit slow, as all callouts for a given operator (e.g. string-append) dispatch to a single routine for the whole program, so they don't get to optimize for any particular call site.

One tempting thing for compiler writers to do is to effectively inline the string-append operation into each of its call sites. This is "open-coding" (in the terminology of the early Lisp implementations like MACLISP). The advantage here is that maybe the compiler knows something about one or more of the operands, so it can eliminate some cases, effectively performing some compile-time specialization. But this is a limited technique; one could argue that the whole point of polymorphism is to allow for generic operations on generic data, so you rarely have compile-time invariants that can allow you to specialize. Open-coding of polymorphic operations instead leads to code bloat, as the string-append operation is just so many copies of the same thing.

Inline caches emerged to solve this problem. They trace their lineage back to Smalltalk 80, gained in complexity and power with Self and finally reached mass consciousness through Javascript. These languages all share the characteristic of being dynamically typed and object-oriented. When a user evaluates a statement like x = y.z, the language implementation needs to figure out where y.z is actually located. This location depends on the representation of y, which is rarely known at compile-time.

However for any given reference y.z in the source code, there is a finite set of concrete representations of y that will actually flow to that call site at run-time. Inline caches allow the language implementation to specialize the y.z access for its particular call site. For example, at some point in the evaluation of a program, y may be seen to have representation R1 or R2. For R1, the z property may be stored at offset 3 within the object's storage, and for R2 it might be at offset 4. The inline cache is a bit of specialized code that compares the type of the object being accessed against R1, in that case returning the value at offset 3; otherwise against R2, returning the value at offset 4; and otherwise falling back to a generic routine. If this isn't clear to you, Vyacheslav Egorov wrote a fine article describing and implementing the object representation optimizations enabled by inline caches.
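Rendered as illustrative JavaScript (a real inline cache is a small piece of generated machine code; R1, R2, and genericGet are hypothetical stand-ins), the cached access might look like:

function y_dot_z_IC(y) {
  if (y.representation === R1) return y.storage[3];  // z at offset 3 for R1
  if (y.representation === R2) return y.storage[4];  // z at offset 4 for R2
  return genericGet(y, 'z');  // unseen form: fall back (and maybe extend the cache)
}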

Inline caches also serve as input data to later stages of an adaptive compiler, allowing the compiler to selectively inline (open-code) only those cases that are appropriate to values actually seen at any given call site.

but how?

The classic formulation of inline caches from Self and early V8 actually patched the code being executed. An inline cache might be allocated at address 0xcabba9e5 and the code emitted for its call-site would be jmp 0xcabba9e5. If the inline cache ended up bottoming out to the generic routine, a new inline cache would be generated that added an implementation appropriate to the newly seen "form" of the operands and the call-site. Let's say that new IC (inline cache) would have the address 0x900db334. Early versions of V8 would actually patch the machine code at the call-site to be jmp 0x900db334 instead of jmp 0xcabba9e5.

Patching machine code has a number of disadvantages, though. It is inherently target-specific: you will need different strategies to patch x86-64 and armv7 machine code. It's also expensive: you have to flush the instruction cache after the patch, which slows you down. That is, of course, if you are allowed to patch executable code; on many systems that's impossible. Writable machine code is a potential vulnerability if the system may be vulnerable to remote code execution.

Perhaps worst of all, though, patching machine code is not thread-safe. In the case of early Javascript, this perhaps wasn't so important; but as JS implementations gained parallel garbage collectors and JS-level parallelism via "service workers", this becomes less acceptable.

For all of these reasons, the modern take on inline caches is to implement them as a memory location that can be atomically modified. The call site is just jmp *loc, as if it were a virtual method call. Modern CPUs have "branch target buffers" that predict the target of these indirect branches with very high accuracy so that the indirect jump does not become a pipeline stall. (What does this mean in the face of the Spectre v2 vulnerabilities? Sadly, God only knows at this point. Saddest panda.)
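In sketch form, with a mutable slot standing in for *loc (genericRoutine and specializedRoutine are placeholders):

const ic = { target: genericRoutine };  // the IC's current implementation
function callSite(...args) {
  return ic.target(...args);            // effectively "jmp *loc"
}
ic.target = specializedRoutine;         // retargeting is one atomic store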

cry, the beloved country

I am interested in ICs in the context of the Guile implementation of Scheme, but first I will make a digression. Scheme is a very monomorphic language. Yet, this monomorphism is entirely cultural. It is in no way essential. Lack of ICs in implementations has actually fed back and encouraged this monomorphism.

Let us take as an example the case of property access. If you have a pair in Scheme and you want its first field, you do (car x). But if you have a vector, you do (vector-ref x 0).

What's the reason for this nonuniformity? You could have a generic ref procedure, which when invoked as (ref x 0) would return the field in x associated with 0. Or (ref x 'foo) to return the foo property of x. It would be more orthogonal in some ways, and it's completely valid Scheme.

We don't write Scheme programs this way, though. From what I can tell, it's for two reasons: one good, and one bad.

The good reason is that saying vector-ref means more to the reader. You know more about the complexity of the operation and what side effects it might have. When you call ref, who knows? Using concrete primitives allows for better program analysis and understanding.

The bad reason is that Scheme implementations, Guile included, tend to compile (car x) to much better code than (ref x 0). Scheme implementations in practice aren't well-equipped for polymorphic data access. In fact it is standard Scheme practice to abuse the "macro" facility to manually inline code so that certain performance-sensitive operations get inlined into a closed graph of monomorphic operators with no callouts. To the extent that this is true, Scheme programmers, Scheme programs, and the Scheme language as a whole are all victims of their implementations. JavaScript, for example, does not have this problem -- to a small extent, maybe, yes, performance tweaks and tuning are always a thing, but JavaScript implementations' ability to burn away polymorphism and abstraction results in an entirely different character in JS programs versus Scheme programs.

it gets worse

On the most basic level, Scheme is the call-by-value lambda calculus. It's well-studied, well-understood, and eminently flexible. However the way that the syntax maps to the semantics hides a constrictive monomorphism: that the "callee" of a call refer to a lambda expression.

Concretely, in an expression like (a b), in which a is not a macro, a must evaluate to the result of a lambda expression. Perhaps by reference (e.g. (define a (lambda (x) x))), perhaps directly; but a lambda nonetheless. But what if a is actually a vector? At that point the Scheme language standard would declare that to be an error.

The semantics of Clojure, though, would allow for ((vector 'a 'b 'c) 1) to evaluate to b. Why not in Scheme? There are the same good and bad reasons as with ref. Usually, the concerns of the language implementation dominate, regardless of those of the users who generally want to write terse code. Of course in some cases the implementation concerns should dominate, but not always. Here, Scheme could be more flexible if it wanted to.

what have you done for me lately

Although inline caches are not a miracle cure for performance overheads of polymorphic dispatch, they are a tool in the box. But what, precisely, can they do, both in general and for Scheme?

To my mind, they have five uses. If you can think of more, please let me know in the comments.

Firstly, they have the classic named property access optimizations as in JavaScript. These apply less to Scheme, as we don't have generic property access. Perhaps this is a deficiency of Scheme, but it's not exactly low-hanging fruit. Perhaps this would be more interesting if Guile had more generic protocols such as Racket's iteration.

Next, there are the arithmetic operators: addition, multiplication, and so on. Scheme's arithmetic is indeed polymorphic; the addition operator + can add any number of complex numbers, with a distinction between exact and inexact values. On a representation level, Guile has fixnums (small exact integers, no heap allocation), bignums (arbitrary-precision heap-allocated exact integers), fractions (exact ratios between integers), flonums (heap-allocated double-precision floating point numbers), and compnums (inexact complex numbers, internally a pair of doubles). Also in Guile, arithmetic operators are "primitive generics", meaning that they can be extended to operate on new types at runtime via GOOPS.

The usual situation though is that any particular instance of an addition operator only sees fixnums. In that case, it makes sense to only emit code for fixnums, instead of the product of all possible numeric representations. This is a clear application where inline caches can be interesting to Guile.
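What such a fixnum-specialized addition IC effectively emits, sketched in JavaScript (isFixnum and genericAdd are hypothetical stand-ins for Guile's internals):

function addIC(a, b) {
  if (isFixnum(a) && isFixnum(b)) {
    const sum = a + b;
    if (isFixnum(sum)) return sum;  // fast path: result still fits in a fixnum
  }
  return genericAdd(a, b);  // bignum/fraction/flonum/compnum callout
}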

Third, there is a very specific case related to dynamic linking. Did you know that most programs compiled for GNU/Linux and related systems have inline caches in them? It's a bit weird but the "Procedure Linkage Table" (PLT) segment in ELF binaries on Linux systems is set up in a way that when e.g. libfoo.so is loaded, the dynamic linker usually doesn't eagerly resolve all of the external routines that libfoo.so uses. The first time that libfoo.so calls frobulate, it ends up calling a procedure that looks up the location of the frobulate procedure, then patches the binary code in the PLT so that the next time frobulate is called, it dispatches directly. To dynamic language people it's the weirdest thing in the world that the C/C++/everything-static universe has at its cold, cold heart a hash table and a dynamic dispatch system that it doesn't expose to any kind of user for instrumenting or introspection -- any user that's not a malware author, of course.

But I digress! Guile can use ICs to lazily resolve runtime routines used by compiled Scheme code. But perhaps this isn't optimal, as the set of primitive runtime calls that Guile will embed in its output is finite, and so resolving these routines eagerly would probably be sufficient. Guile could use ICs for inter-module references as well, and these should indeed be resolved lazily; but I don't know, perhaps the current strategy of using a call-site cache for inter-module references is sufficient.

Fourthly (are you counting?), there is a general case of the former: when you see a call (a b) and you don't know what a is. If you put an inline cache in the call, instead of having to emit checks that a is a heap object and a procedure and then emit an indirect call to the procedure's code, you might be able to emit simply a check that a is the same as x, the only callee you ever saw at that site, and in that case you can emit a direct branch to the function's code instead of an indirect branch.

Here I think the argument is less strong. Modern CPUs are already very good at indirect jumps and well-predicted branches. The value of a devirtualization pass in compilers is that it makes the side effects of a virtual method call concrete, allowing for more optimizations; avoiding indirect branches is good but not necessary. On the other hand, Guile does have polymorphic callees (generic functions), and call ICs could help there. Ideally though we would need to extend the language to allow generic functions to feed back to their inline cache handlers.

Finally, ICs could allow for cheap tracepoints and breakpoints. If at every breakable location you included a jmp *loc, and the initial value of *loc was the next instruction, then you could patch individual locations with code to run there. The patched code would be responsible for saving and restoring machine state around the instrumentation.

Honestly I struggle a lot with the idea of debugging native code. GDB does the least-overhead, most-generic thing, which is patching code directly; but it runs from a separate process, and in Guile we need in-process portable debugging. The debugging use case is a clear area where you want adaptive optimization, so that you can emit debugging ceremony from the hottest code, knowing that you can fall back on some earlier tier. Perhaps Guile should bite the bullet and go this way too.

implementation plan

In Guile, monomorphic as it is in most things, probably only arithmetic is worth the trouble of inline caches, at least in the short term.

Another question is how much to specialize the inline caches to their call site. On the extreme side, each call site could have a custom calling convention: if the first operand is in register A and the second is in register B and they are expected to be fixnums, and the result goes in register C, and the continuation is the code at L, well then you generate an inline cache that specializes to all of that. No need to shuffle operands or results, no need to save the continuation (return location) on the stack.

The opposite would be to call ICs as if they were normal procedures: shuffle arguments into fixed operand registers, push a stack frame, and when the IC returns, shuffle the result into place.

Honestly I am looking mostly to the simple solution. I am concerned about code and heap bloat if I specialize to every last detail of a call site. Also maximum speed comes with an adaptive optimizer, and in that case simple lower tiers are best.

sanity check

To compare these impressions, I took a look at V8's current source code to see where they use ICs in practice. When I worked on V8, the compiler was entirely different -- there were two tiers, and both of them generated native code. Inline caches were everywhere, and they were gnarly; every architecture had its own implementation. Now in V8 there are two tiers, not the same as the old ones, and the lowest one is a bytecode interpreter.

As an adaptive optimizer, V8 doesn't need breakpoint ICs. It can always deoptimize back to the interpreter. In actual practice, to debug at a source location, V8 will patch the bytecode to insert a "DebugBreak" instruction, which has its own support in the interpreter. V8 also supports optimized compilation of this operation. So, no ICs needed here.

Likewise for generic type feedback, V8 records types as data rather than in the classic formulation of inline caches as in Self. I think WebKit's JavaScriptCore uses a similar strategy.

V8 does use inline caches for property access (loads and stores). Besides that, there is an inline cache used in calls, but it just records callee counts and is not used for direct call optimization.

Surprisingly, V8 doesn't even seem to use inline caches for arithmetic (any more?). Fair enough, I guess, given that JavaScript's numbers aren't very polymorphic, and even with a system with fixnums and heap floats like V8, floating-point numbers are rare in cold code.

The dynamic linking and relocation points don't apply to V8 either, as it doesn't receive binary code from the internet; it always starts from source.

twilight of the inline cache

There was a time when inline caches were recommended to solve all your VM problems, but it would seem now that their heyday is past.

ICs are still a win if you have named property access on objects whose shape you don't know at compile-time. But improvements in CPU branch target buffers mean that it's no longer imperative to use ICs to avoid indirect branches (modulo Spectre v2), and creating direct branches via code-patching has gotten more expensive and tricky on today's targets with concurrency and deep cache hierarchies.

Besides that, the type feedback component of inline caches seems to be taken over by explicit data-driven call-site caches, rather than executable inline caches, and the highest-throughput tiers of an adaptive optimizer burn away inline caches anyway. The pressure on an inline cache infrastructure now is towards simplicity and ease of type and call-count profiling, leaving the speed component to those higher tiers.

In Guile the bounded polymorphism on arithmetic combined with the need for ahead-of-time compilation means that ICs are probably a code size and execution time win, but it will take some engineering to prevent the calling convention overhead from dominating cost.

Time to experiment, then -- I'll let y'all know how it goes. Thoughts and feedback welcome from the compilerati. Until then, happy hacking :)

by Andy Wingo at February 07, 2018 03:14 PM

February 06, 2018

Google Chrome Releases

Chrome Beta for Android Update

Ladies and gentlemen, behold!  Chrome Beta 65 (65.0.3325.53) for Android has been released and is available in Google Play.  A partial list of the changes in this build is available in the Git log. Details on new features are available on the Chromium blog, and developers should check out our updates related to the web platform here.

If you find a new issue, please let us know by filing a bug. More information about Chrome for Android is available on the Chrome site.

Estelle Yomba
Google Chrome

by Estelle Yomba (noreply@blogger.com) at February 06, 2018 10:14 PM

Dev Channel Update for Desktop

The dev channel has been updated to 65.0.3325.51 for Mac and Linux, and 65.0.3325.51/.52 for Windows.


A partial list of changes is available in the log. Interested in switching release channels? Find out how. If you find a new issue, please let us know by filing a bug. The community help forum is also a great place to reach out for help or learn about common issues.

Krishna Govind
Google Chrome

by Krishna Govind (noreply@blogger.com) at February 06, 2018 11:48 AM

February 05, 2018

Google Chrome Releases

Stable Channel Update for Chrome OS



The Stable channel has been updated to 64.0.3282.144 (Platform version: 10176.68.0) for most* Chrome OS devices. This build contains a number of bug fixes and security updates. Systems will be receiving updates over the next several days.


If you find new issues, please let us know by visiting our forum or filing a bug. Interested in switching channels? Find out how. You can submit feedback using ‘Report an issue...’ in the Chrome menu (3 vertical dots in the upper right corner of the browser).


Kevin Bleicher

Google Chrome


*Devices with the Play Store will be rolling out over the next few days.


by Kevin Bleicher (noreply@blogger.com) at February 05, 2018 04:13 PM

Stable Channel Update for Chrome OS



The Stable channel has been updated to 64.0.3282.134 (Platform version: 10176.65.0) for most Chrome OS devices. This build contains a number of bug fixes and security updates. Systems will be receiving updates over the next several days.

New Features

  • Take screenshots faster on Chromebooks with a 360-degree hinge by pressing the power and volume down buttons at the same time 
  • Revamped Intent Picker for Play Applications (Same window by default with override) 
  • Lockscreen Performance Improvements 
  • Enable VPN for Google Play Apps 
  • Enhancements to our protected media pipeline for Android 
  • Android Container Auto Update Optimizations 
  • Touchscreen pairing settings 


Security Fixes
This release contains additional browser mitigations against speculative side-channel attack techniques.


If you find new issues, please let us know by visiting our forum or filing a bug. Interested in switching channels? Find out how. You can submit feedback using ‘Report an issue...’ in the Chrome menu (3 vertical dots in the upper right corner of the browser).


Kevin Bleicher
Google Chrome

by Kevin Bleicher (noreply@blogger.com) at February 05, 2018 10:49 AM