Performance Archives - Microsoft Edge Blog
https://blogs.windows.com/msedgedev/tag/performance/
Official blog of the Microsoft Edge Web Platform Team

Making complex web apps faster
https://blogs.windows.com/msedgedev/2025/12/09/making-complex-web-apps-faster/
Tue, 09 Dec 2025 16:57:51 +0000

On the web, speed is everything. The responsiveness of your browser, the time it takes for a web app to appear, and how quickly that app handles user interactions all directly impact your experience as a web user.

At Microsoft, we care deeply about web performance, and we work on it at several levels:
  • Within the browser itself, by making Edge faster and more responsive.
  • Under the hood, by making the browser engine run complex web apps faster.
  • And finally, by helping web developers build faster web apps.
Based on our own experience, we know that complex applications require complex architectures that sometimes rely on multiple windows, iframes, or worker threads. To deal with the slowdowns that these multiple parallel contexts can introduce, we're proposing a new feature for web developers: the Delayed Message Timing API. If you're a web developer, continue reading to learn more about the Delayed Message Timing API proposal, and let us know if it might help you make your own web app faster, or share ways in which the API could be better.

    What causes delays in cross-context messaging?

    Delays can occur when an app exchanges a lot of messages between its various contexts, such as the app's main window, worker threads, or iframes. If those messages get queued and are not processed promptly, delays occur. These delays can degrade the user experience by making the application feel unresponsive. While it's easy to witness the delay, identifying its root cause is challenging with current development tools. Let's review the three types of delays which can occur when exchanging messages between contexts with the postMessage() API, and how the Delayed Message Timing API can help diagnose their root cause.

    Slowdown 1 – The receiving context is busy

    As the following diagram illustrates, the context to which you're sending a message might be processing a long synchronous task, effectively blocking its thread, causing your message to be queued up before it can be processed:

    [Image: Diagram showing two web app contexts (a main document and worker thread). The main document sends a message to the worker thread, but that thread is blocked on a long task and the message gets delayed.]

    To understand if the receiver of the message is busy with other tasks, you need to know how long the message was blocked. To do this, the Delayed Message Timing API introduces the blockedDuration property, which represents the amount of time a message had to wait in the queue before being processed.
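    The proposed API isn't available in browsers yet. As a rough stopgap today, you can approximate how long a message sat in the receiver's queue by sending an absolute timestamp along with the message and comparing it to the time the handler runs. This is a minimal sketch; the worker file name and message shape are illustrative only:
    // main.js (illustrative file name)
    const worker = new Worker('worker.js');
    // performance.timeOrigin + performance.now() gives an absolute time,
    // comparable across the window and the worker.
    worker.postMessage({ payload: 'do-work', sentAt: performance.timeOrigin + performance.now() });

    // worker.js
    self.onmessage = (event) => {
      const receivedAt = performance.timeOrigin + performance.now();
      // Rough upper bound on queueing plus serialization delay for this message.
      console.log(`Message delayed ~${(receivedAt - event.data.sentAt).toFixed(1)} ms`);
    };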

    Slowdown 2 – The task queue is congested

    Another possible reason for cross-document messaging slowdowns is when the task queue of a context is overloaded with many short tasks. In a webpage's main thread, this can often happen when the queue is saturated with high-priority tasks such as user interactions, network handling, and other internal system overhead tasks like navigation, loading, and rendering. In a worker, congestion can occur when many messages are posted in a short period of time. In both cases, tasks or messages arrive faster than they can be processed, creating a backlog that delays subsequent messages, including those that might be time sensitive. Although each individual task isn't long, together, they accumulate and cause congestion, which effectively acts like a single long task.

    [Image: Diagram showing two web app contexts (a main document and worker thread). The main document sends many messages to the worker thread, and each takes a little bit of time to process in that thread, leading, over time, to a longer and longer message blocked duration on the worker thread.]

    To help diagnose this situation, the Delayed Message Timing API introduces the taskCount and scriptTaskCount properties, to show how many tasks were blocking the message.
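    Independently of how you diagnose it, a common mitigation for this kind of congestion is to coalesce many small messages into fewer, larger ones, so the receiver's queue isn't flooded with one task per item. Here is a minimal sketch, assuming your updates can tolerate being batched once per frame (the worker variable and message shape are illustrative):
    const pendingUpdates = [];
    let flushScheduled = false;

    function queueUpdate(update) {
      pendingUpdates.push(update);
      if (!flushScheduled) {
        flushScheduled = true;
        // Flush all buffered updates as a single message, once per frame.
        requestAnimationFrame(() => {
          worker.postMessage({ type: 'batched-updates', updates: pendingUpdates.splice(0) });
          flushScheduled = false;
        });
      }
    }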

    Slowdown 3 – Serialization and deserialization overhead

    Before crossing the boundaries between contexts, messages must be serialized and then deserialized again when received. These operations occur synchronously on the same threads where the messages are sent and received. Serializing and deserializing messages can therefore introduce noticeable overhead, particularly when sending a lot of data over postMessage().

    [Image: Diagram showing two web app contexts (a main document and worker thread). The main document sends a message to the worker thread, but because the message contains a lot of data it takes time to serialize and then deserialize, leading to a long blocked duration.]

    While the serialization and deserialization operations are internal to the browser and you can't change them, the Delayed Message Timing API provides the serialization and deserialization properties to accurately measure their duration.
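    You can't change how the browser serializes messages, but for large binary payloads you can often avoid the cost entirely by transferring them instead of copying them, using the standard transferable-objects argument of postMessage(). A minimal sketch (the worker file name is illustrative):
    const worker = new Worker('worker.js');

    // A large binary payload, e.g. decoded image or audio data.
    const buffer = new ArrayBuffer(32 * 1024 * 1024);

    // worker.postMessage({ pixels: buffer });
    // The line above would copy the whole 32 MB buffer: it gets serialized on
    // the sending thread and deserialized on the receiving one.

    // Transferring instead moves ownership of the buffer to the worker with no
    // copy; `buffer` becomes detached (unusable) in this context afterwards.
    worker.postMessage({ pixels: buffer }, [buffer]);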

    Using the Delayed Message Timing API

    The API will work with windows, tabs, iframes, or workers, and will cover cross-document messaging, cross-worker/document messaging, channel messaging, and broadcast channels. For a complete round-trip timing analysis, you'll need to correlate the Performance entries that you collect from both the sender and receiver contexts. To learn more, check out the explainer. The following code snippet shows how to use the proposed API:
    // Create a PerformanceObserver instance.
    const observer = new PerformanceObserver((list) => {
      console.log(list.getEntries());
    });
    
    // Start observing "delayed-message" Performance entries.
    observer.observe({type: 'delayed-message', buffered: true});
    And here is an example of the properties available on the corresponding "delayed-message" Performance entry:
    {
        "name": "delayed-message",
        "entryType": "delayed-message",
        "startTime": 154.90000009536743,
        "duration": 169,
        "traceId": 4,
        // The type of message-passing event.
        "messageType": "cross-worker-document",
        // The timestamp for when the message was added to the task queue.
        "sentTime": 155,
        // The timestamps for when the receiving context started and stopped
        // processing the message.
        "processingStart": 274.90000009536743,
        "processingEnd": 324.7000000476837,
        // The time the message spent waiting in the receiver's task queue.
        "blockedDuration": 119.90000009536743,
        // The time needed to serialize and deserialize the message.
        "serialization": 0,
        "deserialization": 0,
        // The number of queued tasks blocking the postMessage event.
        "taskCount": 38,
        // The number of entry-point JavaScript tasks, including those with
        // a duration lower than 5ms.
        "scriptTaskCount": 2,
        // The time needed to run all script.
        "totalScriptDuration": 119,
         // The list of PerformanceScriptTiming instances that contribute to the
         // delay.
        "scripts": [
            {
                "name": "script",
                "entryType": "script",
                "startTime": 154.90000009536743,
                "duration": 119,
                "invoker": "DedicatedWorkerGlobalScope.onmessage",
                "invokerType": "event-listener",
                "windowAttribution": "other",
                "executionStart": 154.90000009536743,
                "forcedStyleAndLayoutDuration": 0,
                "pauseDuration": 0,
                "sourceURL": "...",
                "sourceFunctionName": "runLongTaskOnWorker",
                "sourceCharPosition": 267
            }
        ],
        // The PerformanceMessageScriptInfo instance which provides details
        // about the script that sent the message.
        "invoker": {
            "name": "invoker",
            "sourceURL": "...",
            "sourceFunctionName": "sendMessage",
            "sourceCharPosition": 531,
            "sourceColumnNumber": 14,
            "sourceLineNumber": 13,
            "executionContext": {
                "name": "",
                "type": "window",
                "id": 0
            }
        },
        // The PerformanceMessageScriptInfo instance which provides details 
        // about the script that handled (or is handling) the message.
        "receiver": {
            "name": "receiver",
            "sourceURL": "...",
            "sourceFunctionName": "runLongTaskOnWorker",
            "sourceCharPosition": 267,
            "sourceColumnNumber": 41,
            "sourceLineNumber": 9,
            "executionContext": {
                "name": "",
                "type": "dedicated-worker",
                "id": 1
            }
        }
    }
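    Assuming the proposal ships roughly as described above, these entries could then be aggregated to tell the different slowdown causes apart, for example separating queueing delays from serialization overhead. Here is a sketch; the property names are taken from the example entry above, and the thresholds are arbitrary:
    const delayObserver = new PerformanceObserver((list) => {
      for (const entry of list.getEntries()) {
        const serializationCost = entry.serialization + entry.deserialization;
        if (entry.blockedDuration > 100) {
          // Slowdown 1 or 2: the receiver was busy or its queue was congested.
          console.warn(`Message waited ${entry.blockedDuration} ms behind ${entry.taskCount} tasks`);
        } else if (serializationCost > 50) {
          // Slowdown 3: the payload was expensive to (de)serialize.
          console.warn(`Spent ${serializationCost} ms (de)serializing a message`);
        }
      }
    });
    delayObserver.observe({ type: 'delayed-message', buffered: true });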

    Let us know what you think

    The Delayed Message Timing API is in its early stages, and we'd love to hear your feedback about this proposal. There may be additional scenarios where cross-context slowdowns occur in your apps today, and sharing your experiences with us will help us design the right API for you. Take a look at our proposal and let us know your feedback by opening a new issue on the MSEdgeExplainers repo.
    Microsoft Edge sets a new standard for speed and responsiveness
    https://blogs.windows.com/msedgedev/2025/07/07/microsoft-edge-sets-a-new-standard-for-speed-and-responsiveness/
    Mon, 07 Jul 2025 16:12:52 +0000

    Microsoft Edge is continuing to blaze a trail toward an even faster and more responsive browser UI, and through the culmination of efforts over the past few months, we have reached a major milestone: a global First Contentful Paint (FCP) below our target threshold¹. We set this ambitious performance target because industry research shows that waiting longer than 300 to 400ms for the initial content can significantly impact user satisfaction. By meeting this critical threshold, we ensure that the most used browser features appear almost instantly, letting you engage with the content sooner. This achievement not only aligns with widely recognized web performance standards but also underscores our commitment to delivering industry-leading speed. With Microsoft Edge, you get a smooth, enjoyable online experience with minimal delays, faster access to content, and a real sense of instant responsiveness. Since our previous blog post, we have dramatically reduced load times by an average of 40% and achieved greater responsiveness for 13 browser features, such as:
    • Settings: you can now more quickly load and navigate to the browser settings and customize your browsing experience.
    • Read aloud: experience AI-powered reading of webpages in more languages, accents, and voices, all with reduced startup time and smoother playback.
    • Split screen: effortlessly switch between tasks and windows with near-instant navigation and fewer loading delays.
    • Workspaces: from the moment you open a page, tasks feel more responsive and intuitive, allowing you to dive into your work without delay.
    To get a better sense for the improvements, here is a video showing how much faster the Settings UI now loads in Microsoft Edge: https://www.youtube.com/watch?v=86gXSSX4_w0

    Looking ahead

    While we're proud of these advancements, our work is far from finished. In the coming months, expect additional performance improvements across more features, including Print Preview, Extensions, and more. Every update is designed to ensure a browsing experience that remains fast, fluid, and enjoyable—whether you're working, playing, or exploring online. We'd love you to try Microsoft Edge and let us know what you think. Tell us about your experience by sending feedback directly from Edge: go to Settings and more (...) > Help and feedback > Send feedback. Happy browsing!

    ¹ FCP measures how quickly the various feature UIs of Microsoft Edge visually load.
    Request for developer feedback: controlling the performance of embedded web content
    https://blogs.windows.com/msedgedev/2025/03/06/request-for-feedback-controlling-performance-of-embedded-content/
    Thu, 06 Mar 2025 17:00:26 +0000

    We'd love to hear your feedback about a new feature proposal that we think will give you more control over the performance of your website or native application, by constraining the performance impact of any web content that you might embed.

    [Image: Illustration of a browser window, displaying a webpage that embeds some external content in an iframe.]

    When it comes to optimizing the performance of your app, you're often limited by the performance of the content your app embeds. Embedded content may be third-party iframes, for example, but it can also be shared components or apps from other teams within your organization. A common case is when an application gets embedded and starts causing performance problems because it wasn't originally designed for embedded scenarios. Being able to minimize the performance impact of the content you embed is crucial to improving the overall performance of your site or app.

    Goals

    With this new feature proposal, we aim to do two things:
    • To make it possible for you to control the performance impact of the content you embed, and make it easy to do so, without having to determine exactly the individual constraints that are needed.
    • And to make it possible for you to know when performance violations occur, so that you can understand when the user experience is negatively impacted by embedded content and improve the experience.

    Proposal

    We're proposing to achieve the above goals by introducing new Document Policy configurations and by reporting the violations to the embedder, so you can make decisions accordingly, and to the embedded content so they can also be aware of issues and mitigate them. Here are the document policies we're proposing to add for developers to apply to embedded content:
    • basic - Basic web development best practices that are scenario-agnostic. This category encompasses fundamental web development best practices to ensure websites are optimized for performance across various environments. These tasks are typically simple to implement but are frequently overlooked, leading to significant performance issues. Constraints include limits on oversized assets (images, web fonts, etc.), and flagging unzipped assets and uncompressed resources.
    • early-script - JavaScript constraints to enhance performance and minimize impact on user experience before interaction begins. This category focuses on JavaScript development best practices that can be done to minimize performance issues before user interaction begins. This includes capping JavaScript resources loaded initially to avoid overwhelming devices with limited processing power or bandwidth, serving JavaScript with constrained content-length headers, and requiring animations to run on the compositor thread.
    • globals - Overall media and system resource usage constraints. This category entails imposing limits on overall media and system resource usage during interactions to help prevent websites from over-consuming resources and degrading user experiences. This includes capping total media usage, iframe count, iframe depth, and CPU usage before the first interaction.
    • script - Strict JavaScript restrictions while running/post-load. This category enforces restrictions on more complex JavaScript to further enhance performance. This includes limiting long tasks running on the main thread and capping high CPU usage tasks, particularly those involving workers that exceed certain execution times.
    Violations will be reported through the Reporting API. Developers can also opt into letting the browser address the violations directly, for example by not rendering oversized assets, blocking out images that are too large, pausing/blocking loading of scripts that violate limits, etc.
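    Since the violations would surface through the Reporting API, you could also observe them from script with a ReportingObserver. A minimal sketch, assuming these violations arrive as Document Policy violation reports; the exact report type and body shape for this proposal aren't final:
    const reportObserver = new ReportingObserver((reports) => {
      for (const report of reports) {
        // report.body carries the violation details; its exact shape will depend on the final spec.
        console.warn('Embedded content violated a policy:', report.type, report.body);
      }
    }, { types: ['document-policy-violation'], buffered: true });

    reportObserver.observe();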

    Example

    Imagine a complex app that embeds real-time content from different sources through iframes. The app has a weather widget that contains animations or high-definition videos which autoplay for different weather conditions without user interaction.

    [Image: Illustration of a browser window, showing a webpage which includes multiple widgets, three of which appear to be external content that's embedded into the page.]

    To minimize the performance impact of the embedded content, the developer of the host application aligns with producers of the embedded content on guidelines and best practices for the iframes to be loaded into the experience. While the producers of the embedded content make the necessary changes to meet the requirements, the host app serves its main document with Document Policy directives to enforce on the embedded content. In the simplest case, the app uses the basic policy to ensure oversized assets are limited and to require assets to be zipped, by using the following Document Policy header:
    Require-Document-Policy: basic
    If the developer also wants to ensure that video and animations pause when the user is not interacting with them, as well as apply some additional limits on JavaScript code, the developer can also set the early-script policy:
    Require-Document-Policy: basic, early-script
    Alternatively, the developer can choose to set these policies individually, per iframe:
    <iframe policy="basic">
    <iframe policy="basic, early-script">

    Let us know what you think

    We're very excited about how this proposal can give you more control over your app's performance and help uncover issues to improve the overall user experience. We're looking for feedback on this proposal, so if you're interested in this API and want to help us shape it, please read the explainer and share any thoughts you have by opening a new issue on our repository.
    More Edge features get a performance boost
    https://blogs.windows.com/msedgedev/2025/02/18/more-edge-features-get-a-performance-boost/
    Tue, 18 Feb 2025 16:50:39 +0000

    Microsoft Edge is racing into the new year with faster and more responsive features than ever before.

    Starting with Microsoft Edge 132, many of the browser's most important features, such as Downloads, Drop, History, and the inPrivate new tab experience, are now 40% faster than before, on average. (This includes Favorites and Browser Essentials, which we had already mentioned in our earlier blog post: An even faster Microsoft Edge.) These improvements are all made possible thanks to our effort to migrate the browser UI to WebUI 2.0, our markup-first architecture that minimizes the size of our code bundles and the amount of JavaScript code that runs during the initialization of the UI. As an example, the following video shows the improved Downloads UI experience: https://www.youtube.com/watch?v=WDU9kFhOE0Y

    We hope you like these responsiveness improvements. And we aren't done yet! Over the coming months we will continue shipping improvements to even more features of the browser, including print preview, read aloud, settings, and more. We'd love you to try Microsoft Edge and let us know what you think. Tell us about your experience by sending feedback directly from Edge: go to Settings and more (...) > Help and feedback > Send feedback.
    An even faster Microsoft Edge
    https://blogs.windows.com/msedgedev/2024/05/28/an-even-faster-microsoft-edge/
    Tue, 28 May 2024 16:00:55 +0000

    Over the past month, you may have noticed that some of Edge's features have become faster and more responsive. That's because Edge is on a journey to make all user interactions in the browser blazing fast, starting with some of our newest features. Browser Essentials, for example, is now 42% faster for Edge users, and a whopping 76% faster for those of you on a device without an SSD or with less than 8GB RAM! https://www.youtube.com/watch?v=avJmgfGpoJA

    Favorites is another Edge feature that's getting UI responsiveness improvements in Edge 124. Whether favorites are expanded or collapsed, the experience should be 40% faster. And this is just the tip of the iceberg. Over the coming months we will continue to ship responsiveness improvements to many more Edge features including history, downloads, wallet and more. We'd love for you to try Microsoft Edge and let us know what you think. Tell us about your experience by using the feedback tool in Edge: click Settings and more (...) > Help and feedback > Send feedback. Read on for more details on how we made this all possible.

    Monitoring UI responsiveness

    Edge's UI responsiveness improvements started with understanding what you, our users, were experiencing. Edge monitors its UI responsiveness via telemetry collected from end users' machines.  We intentionally did this collection for all the parts of the Edge UI, not just for the web pages that we render.  What did we learn from this data?
    • Research indicates that there are certain absolute responsiveness targets that must be met for a user to perceive the UI as fast, and data showed our UI could be more responsive.
    •  We had an opportunity to improve responsiveness for lower resourced devices.
    We are constantly learning more about how we can improve the performance of the Edge UI and, by using this data, we discovered some areas of improvement.  For example, we observed that the bundles of code that many of our components used were too large. We realized that this was due to two main reasons:
    1. The way we organized the UI code in Edge wasn't modular enough. Teams who worked on different components shared common bundles even when that wasn't strictly necessary. This resulted in one part of the UI code slowing down another part by sharing things unnecessarily.
    2. A lot of our code was using a framework that relied on JavaScript to render the UI.  This is referred to as client-side rendering, which has been a popular trend amongst web developers over the past decade because it helped web developers be more productive and enabled more dynamic UI experiences.

    Rendering web UI like it was meant to be

    Why are we sharing this ancient news? After all, a lot of web pages have been rendering on the client-side for years. Well, it turns out that JavaScript must be downloaded, then run through a JIT compiler (even if you don't use it), and then executed, and all this must be done before any of the JavaScript can start rendering the UI. This introduces a lot of delay before users can see the UI, especially on low-end devices. If you turn back the time machine prior to the Web 2.0 era, the way web content was rendered was by using HTML and CSS.  This is often referred to as server-side rendering, as the client gets the content in a form that's ready to render. Modern browser engines are very fast at rendering this content so long as you don't let JavaScript get in the way. Based on this realization, our questions became:
    1. Could we maintain the developer productivity that JavaScript frameworks have given us while generating code that renders UI fast?
    2. Could the browser be its own best customer?
    3. How fast could we make things if we removed that framework and built our UI just by using the web platform?
    The answers to these questions are Yes, Yes, and Very Fast.

    Introducing WebUI 2.0

    The result of this exercise is an Edge internal project that we've called WebUI 2.0. In this project, we built an entirely new markup-first architecture that minimizes the size of our bundles of code, and the amount of JavaScript code that runs during the initialization path of the UI. This new internal UI architecture is more modular, and we now rely on a repository of web components that are tuned for performance on modern web engines. We also came up with a set of web platform patterns that allow us to ship new browser features that stay within our markup-first architecture and that use optimal web platform capabilities.

    Browser Essentials is the first Edge feature which we converted to test the new architecture and to prove that the concept worked, especially on all types of devices. We're in the process of upgrading components of the Edge user interface to WebUI 2.0 and you can expect to see more features of the browser getting far more responsive over time. We hope that more websites start moving in this direction of markup-first, small bundles, and less UI-rendering JavaScript code. Over time, we plan on making some of our packages open source, so that all developers can benefit. Finally, as we continue improving WebUI 2.0, we're committed to finding opportunities to improve the web platform itself even more. We hope you enjoy this upgraded Edge experience!
    Control Edge memory usage with resource controls
    https://blogs.windows.com/msedgedev/2024/05/02/control-edge-memory-usage-with-resource-controls/
    Thu, 02 May 2024 16:35:31 +0000

    Boost your gaming experience even more with the new resource controls setting in Microsoft Edge 125! Thanks to efficiency mode, Edge already reduces how much of your computer resources the browser uses while you play PC games. And now, starting with Microsoft Edge Beta 125, if you want to have even more control over how much memory your browser uses, we're introducing the new resource controls setting to set how much RAM Edge can use.

    How to access resource controls

    To enable the new resource controls setting, make sure you have Microsoft Edge version 125 or later and go to Settings and more (...) > Settings > System and performance. Under the Manage your performance section, switch the toggle to enable resource controls:

    [Image: The Resource controls setting in Edge.]

    When you enable the setting, by default RAM usage is controlled only when you're PC gaming. If you want to limit Edge's RAM usage all the time, then select Always. Note that, depending on the limit you set, resource controls can affect your browser performance. When you set a limit for the memory Edge can use, your browser functions normally until that limit is hit. When Edge hits the limit, the browser will try to reduce its memory usage and you may notice increased page reloads with more tabs being slept and discarded. Setting a low limit may slow down your browser performance.

    How to keep an eye on browser performance

    To keep a close eye on your browser performance, use the Browser essentials sidebar in Microsoft Edge. The sidebar now also lets you monitor your newly set RAM limit. To open the Browser essentials sidebar, go to Settings and more (...) > Browser essentials:

    [Image: Browser essentials, showing the set RAM limit.]

    Browser essentials lets you toggle efficiency mode and monitor the memory usage related to sleeping tabs. And now, if you have enabled the new resource controls setting, the RAM usage section will also appear. Note that there may be moments when the memory usage appears to be higher than your set limit. This is expected; Edge tries its best to keep usage below your set value but may not always be able to do so. Try the new resource controls setting and Browser essentials, and let us know what you think! If you have any feedback or suggestions for this feature, let us know by sending feedback in Edge: go to Settings and more (...) > Help and feedback > Send feedback.
    Edge is faster than ever before on Macs with M2
    https://blogs.windows.com/msedgedev/2024/02/02/edge-is-faster-than-ever-before-on-macs-with-m2/
    Fri, 02 Feb 2024 17:00:06 +0000

    The performance at which Microsoft Edge renders webpages has always been a top priority for us. Recently, we have enabled Profile-Guided Optimizations (PGO) for Macs with M2, which has shown up to 20% improvements in key browser benchmarks. PGO is a compiler optimization technique that uses profiling to improve program runtime performance. We're also excited to announce that our Speedometer score is now over 500 on Macs with M2 (our tests were run on a Mac Mini with M2).

    [Image: Chart showing that Edge's Speedometer score was below 450 before PGO, and is above 500 after PGO.]

    Other browser benchmarks also show improvements:

    [Image: Chart showing that Edge's MotionMark score was around 4000 before PGO, and is around 5000 after PGO.]
    [Image: Chart showing that Edge's JetStream score was around 310 before PGO, and is around 350 after PGO.]

    Many browsers use these benchmarking tools to measure how well they perform at running tasks that correlate to real-world user experiences.
    • Speedometer 2.1 determines the responsiveness of a browser, so it's a good benchmark to use for overall website performance.
    • MotionMark 1.2 is a graphical browser test suite that measures the rendering performance of complex webpages that have lots of graphics & animations, such as Excel Online.
    • And JetStream measures how quickly browsers can start & run code. It's a good measure for how responsive the browser is on code-intensive sites.
    We'd love for you to try Microsoft Edge on your Mac with M2 and let us know what you think! Are you getting over 500 on Speedometer on your device? Tell us about your experience by using the feedback tool in Microsoft Edge: click Settings and more (...) > Help and feedback > Send feedback.
    Collaborating with the Office Performance team for better web performance tools
    https://blogs.windows.com/msedgedev/2023/08/10/collaborating-office-performance-better-web-performance-tools/
    Thu, 10 Aug 2023 16:01:02 +0000

    On the Microsoft Edge team, we spend a lot of time working with product teams across Microsoft to support them in building great web experiences, and jointly raising the bar for how these apps perform for everyone, whatever device they may use. The complexity of such apps can be gauged by indicators such as:
    • The number of nodes in the DOM tree.
    • The number of stylesheets and CSS rules in the document.
    • The number of HTTP requests, and the total size of the transferred data on page load.
    The more DOM nodes, styles, and other resources a web app needs, the more work the browser must do to load and update that app. Even a simple mouse movement on an app could trigger a lot of browser rendering work, which could take a long time to run. Microsoft builds web products like Outlook, Word, Excel, PowerPoint, and Teams that are designed to help their users be productive and achieve more. These are powerful tools, which are correspondingly complex. Web applications like these may need more than 5000 CSS rules, create more than 2000 nodes in the DOM tree, and send dozens of HTTP requests on page load. This isn't only the case for Microsoft products—other web-based messaging, videoconferencing, or office-type products tend to have the same level of complexity. In comparison, simpler static websites, such as personal blogs, are easily 10 times smaller. With apps this complex, developers can't assume they will be fast by default. Building them to be fast, as well as keeping them fast, requires intentional effort and effective tools. Working with these products has given us deeper insights into real-world use cases, which helps us imagine and create better performance tools. These tools, in turn, help those teams investigate confusing performance issues that go deep into the browser's codebase, which often leads to browser-level optimizations for all websites.
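    To get a rough sense of where your own app sits on this complexity scale, a few lines in the DevTools console can count DOM nodes and CSS rules; this is a quick approximation, not an official metric:
    // Rough number of nodes in the DOM tree.
    console.log('DOM nodes:', document.querySelectorAll('*').length);

    // Rough number of CSS rules across accessible stylesheets. Reading the rules
    // of a cross-origin stylesheet without CORS throws, so those are skipped.
    let ruleCount = 0;
    for (const sheet of document.styleSheets) {
      try { ruleCount += sheet.cssRules.length; } catch (e) { /* cross-origin sheet */ }
    }
    console.log('CSS rules:', ruleCount);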

    Reducing CPU sampling overhead by 95%

    Earlier this year, the Office team was investigating the launch performance of the PowerPoint web app, which uses multiple out-of-process iframes. While doing so, they noticed significant CPU usage overhead while recording profiles in the Performance tool of Edge DevTools. In fact, in some instances, DevTools was seemingly responsible for saturating an engineer's 10-core CPU while profiling. Using Event Tracing for Windows (ETW), the team determined that this was due to the way the CPU profiler in Chromium (the browser engine which Edge is built on) was doing its sampling. In summary, it was using a busy-waiting approach, which meant constantly running code even while waiting for the next event.

    [Image: ETW trace showing the unusual CPU usage caused by Chromium while profiling.]

    Under normal circumstances, the overhead would have been negligible, but with the many processes that PowerPoint used, it was starting to become a problem. By using a more accurate sleep timing method, the team was able to reduce the CPU sampling overhead by 95% and decrease the total CPU consumption from Edge by 71% while profiling. Of course, this helped the PowerPoint team investigate and improve the load performance of their app, but this also means that everyone using the Performance tool in DevTools (in Edge or any Chromium browser) now has a much better experience.

    Simplified source maps everywhere you need them

    We're also working to improve the quality and reliability of the Performance tool with better support for source maps. Source maps are used in DevTools to map your production code (often bundled and minified) back to your original source code. While source maps have been available in the Sources tool for a while, we've added support for them in the Performance and Memory tools too. The Performance tool now automatically shows un-minified function and file names, which makes investigating slowdowns in your apps a lot easier.

    [Image: A before/after comparison of a Performance recording. Before: the flame chart in the tool shows minified function names. After: the function names make sense.]

    On top of this, we've added support for Azure Pipelines, which makes publishing source maps during your build process a lot simpler. You can now generate and publish your source maps to the symbol server and then securely retrieve them in DevTools. This makes getting source maps a lot easier in DevTools; all you need to do is either log in to your Azure Active Directory from DevTools, or enter an ADO access token. Being able to see original function names in the Performance tool has already helped us many times. For example, it helped us spot several easy wins while investigating and improving the load time of PowerPoint. It would have been nearly impossible to find as many of these function calls as we did with a minified profile. When working on a complex codebase, there can be thousands of other minified function calls in a performance profile, but seeing the original function names made it a lot easier to recognize bad patterns.

    Beyond JavaScript performance

    When performance improvements are needed, the first wins often come from working on the app's JavaScript code. But JavaScript isn't the only thing that a browser needs to run in order to render a webpage. One very important thing is rendering the pixels on the screen. This requires computing the styles of all elements, figuring out the layout of the page, and painting those pixels. JavaScript can still be a culprit here, as it's often what causes the browser to re-render the page in the first place. But the speed at which the browser does render it depends on how many DOM nodes there are to re-render, how many CSS rules apply to them, and even sometimes on the complexity of the CSS selectors that apply to them (see The truth about CSS selector performance for more information).

    The Office Performance team was investigating ways to improve responsiveness in Microsoft Word. While doing so, they noticed a 75ms style recalculation event that was making the launch of the app slower than it could be. Style recalculation events are when the browser engine needs to figure out what just changed in the page, often as a result of a DOM mutation, then collect the CSS styles that apply to the changed elements and compute their values.

    [Image: The flame chart of a performance profile, showing an unusually long Recalculate Style event, caused by a JavaScript function.]

    Thanks to source maps support in the Performance tool, it was obvious that the cause for this long style recalculation event was, in fact, a JavaScript function that checked if pasting text was supported. It was then easy to go back to the code and do the same check in a way that didn't cause such a long style recalculation. It's not always possible to avoid a style recalculation event or move it to a less inconvenient time. Sometimes, these events just need to happen and all we can do is try to make them happen as fast as possible. That's why we recently shipped Selector Stats, a feature that helps you discover which CSS selectors may have a negative impact on the time it takes the browser to recalculate some styles. With Selector Stats, you get aggregate statistics for your CSS selectors. This means you can see the statistics for all the style recalculation events in a performance profile.

    [Image: Performance tool in Edge, showing a profile that contains multiple Recalculate Style events, and the Selector Stats table with stats for all of these events.]
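    One common way for JavaScript to trigger style recalculation at awkward times is to mutate the DOM and then immediately read a style- or layout-dependent value, which forces the engine to recalculate synchronously. A minimal sketch of that anti-pattern, and one common way to avoid it by batching reads before writes; `element` here stands for any DOM node in your app (this is an illustration, not the actual code from Word):
    // Anti-pattern: mutate the DOM, then immediately read a layout-dependent value.
    // The read forces a synchronous style recalculation (and possibly layout).
    element.classList.add('expanded');
    const newHeight = element.getBoundingClientRect().height; // forced, synchronous work

    // Common fix: do the reads first, then the writes, so the style and layout
    // work can happen once, in the browser's normal rendering pass.
    const currentHeight = element.getBoundingClientRect().height; // read
    element.classList.add('expanded');                            // write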

    A slew of memory improvements

    The previous improvements were all about helping make apps faster to load and to run. But we also spend a lot of time making sure our apps have as small a memory footprint as possible. And again, we work together across teams, to invent and implement the tools that can make our lives easier. Here are some of the improvements we’ve made to the DevTools’ Memory tool that have helped us already, and that may be useful to you too:
    • The Memory tool can now load much larger heap snapshot files: when memory gets out of control and a memory leak investigation is needed, it helps if the tool can deal with large amounts of data.
    • We also made recording large heap snapshots much faster. Generating large heap snapshots that weigh over 1GB is now 70% to 86% faster.
    • We made it possible to compare two heap snapshots by their retainer paths. For applications with a size and complexity that’s comparable to the Microsoft 365 web apps, this makes finding memory leaks much easier. You can now see what top level objects are growing in retained size.
    • Navigating large memory heap snapshots is also easier now with the ability to filter snapshots by node types.

    In the end, it’s all about teamwork

    At the center of all these new DevTools features and Microsoft product improvements, we're working to leverage our close relationships with some of the industry's most powerful web apps to ensure that developer tooling is up to the task. Working across teams is a superpower that we very often use to overcome challenges. When you put your heads together, there really isn't anything you can't do. Performance and memory issue investigations, in particular, are hard to do on your own.

    So, to close, let me introduce one last DevTools feature that's all about teamwork: Enhanced Traces. We made sharing performance and memory recordings with other people much better with Enhanced Traces. With Enhanced Traces, performance and memory recordings now preserve a lot more data than previously possible. These traces include more information, such as the messages in the console, the JavaScript code that was running on the page at the time of recording, and the DOM nodes. This means that if you record a performance profile on your own machine, and then export it as an Enhanced Trace, you can send the trace to a coworker, and when they open it on their machine, they'll see what you see. The same DOM nodes, the same console messages, the same scripts, and obviously the same performance profile.

    [Image: Performance tool in Edge, showing the export Enhanced Trace menu item.]

    We hope these features help you create faster and leaner web applications. As always, we value your feedback about Edge DevTools. If you have any thoughts, questions, feature requests, or problems related to these features, or anything else in DevTools, please file an issue on our GitHub repository. Happy debugging!
    The truth about CSS selector performance
    https://blogs.windows.com/msedgedev/2023/01/17/the-truth-about-css-selector-performance/
    Tue, 17 Jan 2023 17:02:21 +0000

    If you're a web developer, you may have already heard that some CSS selectors are faster than others. And you're probably hoping to find a list of the better selectors to use in this article.

    Well, not quite. But bear with me, I promise that by the end of this article you'll know how to find out what actually matters for your own pages.

    A quick look behind the scenes

    The way you write CSS selectors does play a role in how browsers render your web pages. Whenever a part of your page changes, the browser engine that's running it needs to take a look at the new DOM tree, and figure out how to style it based on the available CSS stylesheets. This operation of matching styles to DOM nodes is called a style recalculation. Without getting into a lot of details, the browser engine needs to look at all your rules and make decisions as to which ones apply to a given element. To do this, the engine needs to look at the rule selector, and this happens from right to left. For example, when the engine sees a selector like `.wrapper .section .title .link`, it will try to match the `link` class with the element first and, if that matches, go up the chain from right to left to find an ancestor element with class `title`, then one with class `section`, and finally one with class `wrapper`. This example illustrates that it's likely faster for the browser engine to match just `.link` than it is to match the longer `.wrapper .section .title .link` selector. There are just fewer things to check.

    Classes aren't the only type of identifiers you can use in your CSS selectors, of course. One interesting example is using attribute selectors to do substring matching, like `[class*="icon-"]`. This type of selector requires the browser engine to not only check if the element has a class attribute but also whether the value of this attribute contains the substring `icon-`. That's another example of how different ways of writing selectors may require more or less work for the engine to apply CSS rules.

    In practice, does it matter?

    Maybe. This heavily depends on the web page, the size of the DOM tree, the amount of CSS rules, and whether the DOM changes often. There's unfortunately no rule around this. In fact, talking about rules, as an industry, we like inventing rules for what's good and what's bad. Rules help us make quick decisions and guide us when writing code and designing software. But they can also blind us from what's really happening in our specific case. When it comes to writing CSS selectors, strictly applying rules, or using a linter to do it automatically, may actually be counter-productive in some cases. Overly complex CSS selectors, coupled with a huge DOM tree that changes a lot could very well lead to bad performance. But there's a balance. Over-indexing on theoretical rules and changing selectors just to please your linter and hope for better performance may just be making your CSS harder to read and maintain, for not much actual gains. So, write the code in a way that makes sense for your app, and is easy to read and maintain, and then measure the actual performance of your important user scenarios.

    Measure!

    Prefer measuring your key app scenarios over blindly applying a set of rules for how to write fast code. Know the tools at your disposal, and use them. Microsoft Edge DevTools has a Performance tool that can be a real eye opener when your app starts feeling slow. I want to emphasize the word feeling here. Build empathy for your users and use the devices they actually use if you can. Your development machine is likely much more powerful than your users' devices. In fact, one nice thing you can do with DevTools is slow down your CPU and network connection from within the tools directly. The Performance tool can look quite complicated, but we have documentation that should help. Also, everything happens in your browser only, so you can try things out without breaking anything, and you can always just reload the page and re-open DevTools if you get into trouble. Learn to use the tools available to measure your key scenarios, and learn to identify the biggest blocks that are making things slow.

    [Image: The Performance tool in Edge DevTools. The "Main" panel is expanded to show a flame chart in "Bottom-Up" sorting.]

    If style recalculation is, indeed, one of the things that is making your app slow, then we've got good news for you. When it comes to investigating a performance issue you've zeroed in on, nothing beats having a tool that just gives you the root cause for it immediately.

    Selector stats to the rescue

    Starting with Microsoft Edge 109, the Performance tool in DevTools can list the most costly selectors in any style recalculation. Here's how to get it:
    1. Open the Performance tool.
    2. Open the tool’s settings by clicking the cog icon in the top-right corner.
    3. Check the Enable advanced rendering instrumentation (slow) option.
    4. Click Record, execute the scenario on the webpage that you want to improve, and then click Stop.
    5. In the recorded profile, identify a long style recalculation that you want to improve and select it in the waterfall view ("Main" section).
    6. In the bottom tab bar, click Selector Stats.
    DevTools now gives you the list of all the CSS selectors that got calculated by the browser engine during this recalculation operation. You can sort the selectors by the time they took to process or the number of times they matched.

    [Image: The Performance tool with the "Selector Stats" tab expanded, displaying a list of CSS selectors and time elapsed to calculate them.]

    If you find a selector that required a long time to process, and was matched many times, that might be a good candidate to try and improve. Could the selector be simplified? Could it be made more specific to the elements it should match? This new feature makes it instant to go from a suspicious-looking style recalculation to the individual CSS selectors that are causing it to be that long. You can then go back to your source code, improve those particular selectors, and measure again.

    Case study

    To make things more practical, let's try to improve an actual webpage. We will use a photo gallery page built as a demo just for this.

    [Image: A sample gallery page showing a set of photos with descriptions and metadata for each photo, and a row of filter options at the top.]

    This page has a toolbar at the top to filter photos by camera model, aperture, exposure time, etc., and switching between camera models feels a bit slow right now. Although this demo page was built just for this, it does show a case that's similar to what we encountered in our own products at Microsoft. The Edge team and other product teams at Microsoft who rely on the web platform collaborate closely in this area in order to create the best user experience. In certain specific scenarios, we were seeing unusually long style recalculations in apps that have a lot of DOM elements (like the demo page we'll use here, which has around 5000 elements). Having access to the CSS selector stats tool helped us a lot. The scenario we'll be focusing on is the following:
    • Load the demo page, and wait for the filters to be ready.
    • Switch the camera model filter to another value and start the performance recording.
    • Switch back to all camera models and stop the recording.
    Switching back to all photos is slow, so we're measuring only that part. We'll also slow down the CPU four times to have more realistic results than we'd normally get on a powerful development machine. Once the recording is ready, we can easily see a long style recalculation block in the profile, amounting to more than 900 milliseconds of work in my case. Let's click on this block, open the Selector Stats pane, and then sort by elapsed time:

    [Image: The DevTools Selector Stats pane with results sorted by elapsed time.]

    The more work a selector requires to match, and the more times it's matched, the more potential wins we can get by improving this selector. In the list above, the following selectors seem interesting to look at:
    • `.gallery .photo .meta ::selection`
    • `.gallery .photo .meta li strong:empty`
    • `[class*=" gallery-icon--"]::before`
    • `.gallery .photo .meta li`
    • `*`
    • `html[dir="rtl"] .gallery .photo .meta li button`

    Improving the ::selection selector

    We use `.gallery .photo .meta ::selection` in the demo web page to style the background and text colors of user selections inside the photo metadata part of the page. When users select the text below a photo, custom colors are used instead of the browser default ones. This particular case is actually problematic because of a bug in the code. The selector should really be `.gallery .photo .meta::selection` instead, with no extra space between `.meta` and `::selection`. Because there's an extra space there, our selector is actually interpreted by the engine as: `.gallery .photo .meta *::selection` which makes it a lot slower to match during a style recalculation because the engine needs to check all DOM elements, and then verify if they're nested inside the right ancestors. Without the extra space, the engine only needs to check if the element has a class of `.meta` before going further.

    Improving the :empty selectors

    The selector `.gallery .photo .meta li strong:empty` looks suspicious at first sight. The `:empty` pseudo-class means that the selector only matches when the `strong` element doesn't have any contents. This might require the engine to do a bit more work than just checking the element's tag name, but it's very useful. However, looking at other CSS rules close to this one, we can see the following:
    .gallery .photo .meta li strong:empty {
      padding: .125rem 2rem;
      margin-left: .125rem;
      background: var(--dim-bg-color);
    }
    
    html[dir="rtl"] .gallery .photo .meta li strong:empty {
      margin-left: unset;
      margin-right: .125rem;
    }
    The same selector is repeated twice, but the second instance is prefixed with `html[dir=rtl]` which is useful to override the first rule when the text direction on the page is right to left. In this case, the rtl direction rule overrides the left margin and replaces it with a right margin. To improve this, we can use CSS logical properties. Instead of specifying a physical margin direction, we can use a logical one that will adapt to any text direction, as shown below:
    .gallery .photo .meta li strong:empty {
      padding: .125rem 2rem;
      margin-inline-start: .125rem;
      background: var(--dim-bg-color);
    }
    While we’re doing this, there are other places in the CSS code that use the same attribute selector which can be improved by using logical CSS properties. For example, we can get rid of the `html[dir="rtl"] .gallery .photo .meta li button` selector we found earlier.

    Improving the [class*=" gallery-icon--"] selector

    Our next selector is this complicated-looking attribute selector: `[class*=" gallery-icon--"]::before`. Attribute selectors can be very useful, so before removing them, check whether they're really having a negative impact. In our case, this selector does seem to play a role. Here are the CSS rules we use this selector for:
    [class*=" gallery-icon--"]::before {
      content: '';
      display: block;
      width: 1rem;
      height: 1rem;
      background-size: contain;
      background-repeat: no-repeat;
      background-position: center;
      filter: contrast(0);
    }
    
    .gallery-icon--camera::before { background-image: url(...); }
    .gallery-icon--aperture::before { background-image: url(...); }
    .gallery-icon--exposure::before { background-image: url(...); }
    ...
    The idea here is that we can assign any of these icon classes to an element and it'll get the corresponding icon. While this is a handy feature, we're asking the engine to read the class value and do a substring search on it. Here is one way we can help the engine do less work:
    .gallery-icon::before {
      content: '';
      display: block;
      width: 1rem;
      height: 1rem;
      background-size: contain;
      background-repeat: no-repeat;
      background-position: center;
      filter: contrast(0);
    }
    
    .gallery-icon.camera::before { background-image: url(...); }
    .gallery-icon.aperture::before { background-image: url(...); }
    .gallery-icon.exposure::before { background-image: url(...); }
    Now instead of using just one class, we'll need to add two classes to elements: `<div class="gallery-icon camera">` instead of `<div class="gallery-icon--camera">`. But, overall, the feature is still very easy to use and causes less work for the engine when there are many DOM nodes to re-style like in our demo page.

    Improving the .gallery .photo .meta li selector

    This selector looks really inoffensive. But, as described earlier, it still forces the browser to go and check multiple levels in the list of ancestors to the `li` element. Knowing that our web page has a lot of `li` elements, this can amount to a lot of work. We can simplify this by giving our `li` elements a specific class, and removing the unnecessary nesting. For example:
    .photo-meta {
      display: flex;
      align-items: center;
      gap: .5rem;
      height: 1.5rem;
    }

    Improving the * selector

    The `*` symbol is used as a universal selector in CSS that matches any element. This ability to match anything means that the engine needs to apply the associated rule to all elements. As we can see in our performance recording, this selector is indeed being matched many times. It's worth looking into what the CSS rule actually does. In our case, it applies a specific `box-sizing` value:
    * {
      box-sizing: border-box;
    }
    
    This is very common in CSS, but in our case, it actually makes sense to remove it, apply the `box-sizing` only where needed, and then see the gains.

    Results

    With all of these improvements done, it's time to check the performance of our scenario again.

    [Image: The Performance tool showing a significant improvement in elapsed time on the Recalculate Style block addressed above.]

    In the above performance recording, the same Recalculate Style block that was taking almost a second to run is now taking around 300ms, which is a really big win!

    Conclusion

    The case study showed that improving certain CSS selectors can lead to important performance gains. It's key to remember, however, that this will depend on your particular use case. Test the performance of your web page using the Performance tool, and if you find that style recalculations are making your scenarios slow, use the new Selector Stats pane in Microsoft Edge. As always, if you have any feedback for the DevTools team, please reach out to us by opening a new issue on our GitHub repository.
    Sleeping Tabs in Microsoft Edge: Saving extra resources when you need it most
    https://blogs.windows.com/msedgedev/2022/12/06/sleeping-tabs-edge-105-sleep-before-discarding/
    Tue, 06 Dec 2022 17:00:13 +0000

    To help users save memory and CPU resources, Microsoft Edge automatically puts tabs to sleep until you return to them. This keeps your browser fast and responsive, even if you use a large number of tabs.

    Starting in Microsoft Edge 105, we automatically put tabs to sleep when your device is under memory pressure, and we have already slept 1.38 billion tabs to relieve memory pressure on Windows devices as a part of this update. When memory usage is too high, many browsers discard tabs to save memory – but those pages must be fully reloaded before you can return to them. Sleeping tabs resume without reloading, so you can return to your work faster. Sleeping a tab saves 83% of its memory on average, so sleeping your high resource tabs can relieve memory pressure without slowing down your workflow in Microsoft Edge. We're always listening to user feedback to improve performance! Share your experience or make a suggestion using the "Help and feedback" button under "…" ("Settings and more"). If you have any questions about sleeping tabs, visit Learn about performance features in Microsoft Edge.