The Radeon™ GPU Profiler

The Radeon GPU Profiler is a performance tool that can be used by developers to optimize DirectX®12, Vulkan®, OpenCL™ and HIP applications for AMD RDNA™ hardware. It is part of a suite of tools comprised of the following software:

  • Radeon Developer Mode Driver - This is shipped as part of the AMD public Adrenalin driver and supports the developer mode features required for profiling.

  • Radeon Developer Service (RDS) - A system tray application that unlocks the Developer Mode Driver features and supports communications with high level tools. A headless version is also available called RadeonDeveloperServiceCLI.

  • Radeon Developer Panel (RDP) - A GUI application that allows the developer to configure driver settings and generate profiler data from DirectX12, Vulkan, OpenCL and HIP applications.

  • Radeon GPU Profiler (RGP) - A GUI tool used to visualize and analyze the profile data.

This document describes how to generate a profile using the Radeon Developer Panel and how the Radeon GPU Profiler can be used to examine the output profiles. The Radeon GPU Profiler is currently designed to work with compute applications and frame-based graphics applications. It is specifically designed to address the issues that developers face in the move from traditional graphics APIs to explicit APIs. It also provides visualization of RDNA hardware-specific information, allowing the developer to tune an application to the full potential of the architecture. The tool provides unique visualizations of queue synchronization using fences and semaphores, asynchronous compute, and barrier timings. Currently, it supports the explicit graphics APIs (DirectX12 and Vulkan) and compute APIs (OpenCL and HIP); it will NOT work with older graphics APIs such as DirectX11 or OpenGL.

Graphics APIs, RDNA hardware, and operating systems

Supported APIs

  • DirectX12

  • Vulkan

Supported RDNA hardware

  • AMD Radeon RX 7000 series

  • AMD Radeon RX 6000 series

  • AMD Radeon RX 5000 series

  • AMD Ryzen™ Processors with Radeon Graphics

Supported Operating Systems

  • Windows® 10

  • Windows® 11

  • Ubuntu 22.04 LTS (Vulkan only)

Compute APIs, RDNA hardware, and operating systems

Supported APIs

  • OpenCL

  • HIP

Supported RDNA hardware

  • AMD Radeon RX 7000 series

  • AMD Radeon RX 6000 series

  • AMD Radeon RX 5000 series

  • AMD Ryzen Processors with Radeon Graphics

Supported Operating Systems

  • Windows 10

  • Windows 11

Radeon GPU Profiler - Quick Start

How to generate a profile

The first thing you will need to do is generate a profile. Currently, this is done via the Radeon Developer Panel. Read the documentation provided with this distribution for information on how to capture a profile. This can be obtained from within the Radeon Developer Panel or from the link on the Radeon GPU Profiler “Welcome” view. The Radeon Developer Panel documentation can also be viewed online at: https://radeon-developer-panel.readthedocs.io/en/latest/

Starting the Radeon GPU Profiler

The following executables can be found in the download directory.

_images/rgp_executables.png

Start RadeonGPUProfiler.exe (this is the tool used to view profile data).

How to load a profile

There are a few ways to load a profile into RGP.

  1. Use the “File/Open profile” pull down menu item, or the “File/Recent profiles” pull down menu item.

_images/rgp_file_load.png _images/rgp_file_recent.png
  2. Go to the “Welcome” view and click on the “Open a Radeon GPU Profile…” link.

  3. Go to the “Welcome” view and click on a profile that you have previously loaded in the Recent list.

_images/rgp_welcome.png
  4. Go to the Recent profiles view to see a full list of all your recent profiles.

    Notice that there is additional information provided for each profile when viewed in this pane, such as the GPU the profile was taken on, the date when the capture was performed and the number of events contained in the profile.

_images/rgp_recent_profiles.png
  5. Load a profile into a new instance of the Radeon GPU Profiler from the Radeon Developer Panel. Select a profile in the list and click on “Open profile”.

_images/rdp_open_profile.png
  6. Drag and drop a profile onto the Radeon GPU Profiler executable, or onto an already open RGP instance.

The Radeon GPU Profiler user interface

There are four main menus in the Radeon GPU Profiler and each has a number of sub-windows. The two main UIs that deal with the analysis of the profile data are within the Overview and Events sections.

  1. Start

    1. Welcome - Shows links to help documentation, a list of recently opened profiles, and a sample profile.

    2. Recent profiles - Displays a list of the recently opened profiles.

    3. About - Shows build information about RGP and useful links.

  2. Overview

    1. Frame Summary - Contains a summary of the structure of the graphics frame. This overview section is not available for OpenCL or HIP profiles.

    2. Profile Summary - Contains a summary of the structure of the OpenCL or HIP profile.

    3. Barriers - Details of the barrier usage in the profile.

    4. Context rolls - Details of the hardware context register usage. This overview section is not available for OpenCL or HIP profiles.

    5. Most expensive events - List of the most expensive events.

    6. Render/depth targets - Overview of render targets used throughout the graphics frame. This overview section is not available for OpenCL or HIP profiles.

    7. Pipelines - Details of the pipeline usage in the profile.

    8. Device configuration - Information about the GPU the profile was generated on.

  3. Events

    1. Wavefront occupancy - Shows detailed information about wavefront occupancy and event timings.

    2. Event timing - Tree view of profile events and their timing data.

    3. Pipeline state - Tree view of profile events and their graphics/compute pipeline state.

    4. Instruction timing - Shows detailed instruction timing information for each instruction of a single shader.

  4. Settings

    1. General - Adjust desired time units, state buckets, GPU boundness percentage, and wavefront view detail levels.

    2. Themes and colors - Customize colors for graphics API and hardware data.

    3. Keyboard shortcuts - Shortcuts for navigating various parts of the UI.

Settings

General

Check for updates on startup: Radeon GPU Profiler will check for an available update during startup. If an update is available, a notification will appear on the Welcome page.

Units: This tells the profiler whether to work in clocks, nanoseconds, microseconds, or milliseconds. Refer to the keyboard binding in the section below to quickly toggle between these time units.

State buckets: Specify how the profiler should generate its own state buckets. This can be based on a combination of shader base address, depth buffer address, render target address, and API PSO hash.
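
For illustration only, the sketch below shows how events could conceptually be grouped by such a composite key. The structure and field names are hypothetical and do not reflect RGP internals:

  #include <cstdint>
  #include <map>
  #include <tuple>
  #include <vector>

  // Hypothetical composite key built from the fields RGP can bucket by.
  // Which fields participate depends on the combination chosen in this setting.
  struct StateBucketKey {
      uint64_t shaderBaseAddress;
      uint64_t depthBufferAddress;
      uint64_t renderTargetAddress;
      uint64_t apiPsoHash;

      bool operator<(const StateBucketKey& other) const {
          return std::tie(shaderBaseAddress, depthBufferAddress,
                          renderTargetAddress, apiPsoHash) <
                 std::tie(other.shaderBaseAddress, other.depthBufferAddress,
                          other.renderTargetAddress, other.apiPsoHash);
      }
  };

  // Events sharing the same key land in the same state bucket (bucket -> event IDs).
  using StateBuckets = std::map<StateBucketKey, std::vector<uint32_t>>;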

Sync event time windows: Keep the Wavefront occupancy and Event timing panes in sync while browsing through different time ranges.

Processor boundness: Specific to the Frame summary and Profile summary, this value tells RGP the threshold at which to consider an application GPU bound or CPU bound.

Wavefront occupancy detail: Increase the visual quality of wavefronts in the Wavefront occupancy pane. This allows users to see a more accurate representation of GPU occupancy at the expense of some profiler performance.

Themes and colors

The profiler makes heavy use of coloring to display its information. This pane allows users to thoroughly customize those colors.

_images/rgp_themes_and_colors_settings.png

NOTE: There are some coloring modes in RGP that use randomly-generated colors. These are the Color by event, Color by API PSO and Color by user events modes. In some situations, the randomly-generated colors can cause two very similar colors to be displayed near each other in the user interface, making it hard to distinguish between the similar colors. In order to alleviate this issue, the Random color seed setting allows the random seed to be altered, generating a different set of random colors.

_images/rgp_color_theme_drop_down.png

Color theme: The color theme can be changed with the “Color Theme” drop down combo box. This changes the application-wide background and text color. The “Light” option maintains RGP’s default look of white backgrounds with black text. The “Dark” option changes RGP to have a dark background color with lighter text. The “Detect OS” option uses the system’s color theme to determine whether the color theme should be light or dark. If the system’s color theme cannot be detected, RGP will default to the light theme. If the system’s color theme is changed while RGP is open with the “Detect OS” option selected, the change will not apply until the application has been restarted. On Windows operating systems, a pop-up prompt will recommend restarting the application when the color theme is changed, because not all parts of RGP will update to a change in color theme until the application is restarted. Changing the color theme will not change any other color customization options that have been selected.

_images/rgp_color_theme_changed_prompt.png

This is an example of how RGP will look when the color theme is changed to dark:

_images/rgp_dark_theme_frame_summary_pane.png

Keyboard shortcuts

Here users will find the Keyboard shortcuts pane:

_images/rgp_keyboard_shortcuts_settings.png

The System activity, Wavefront occupancy and Event timing shortcuts are specific to zooming and panning operations that can be performed within the Frame summary and Events subtabs. See the section entitled Zoom Controls for more information.

_images/rgp_tabs_1.png _images/rgp_tabs_2.png

The Event timeline section refers to panning and event selection operations for the bottom graph within the Wavefront occupancy view.

The Instruction timing section refers to keystrokes to change API PSO, event and export selection.

The ISA Viewer (in Pipeline state and Instruction timing) section refers to keystrokes to jump to a specific instruction, search for text, expand or collapse blocks of code, traverse through navigation history and toggle line numbers.

The Global navigation section refers to keystrokes that aid user navigation, and are always detected regardless of which pane is visible.

The Global hotkeys section refers to any hotkeys available anywhere in the product. Pressing CTRL + T allows the user to quickly cycle through the different time units (cycles, milliseconds, microseconds or nanoseconds) from any pane, rather than having to go to the settings. The user can also open or close a profile from any pane using the Global hotkeys.

We encourage all users to adopt these keystrokes while using RGP.

UI Navigation

In an effort to improve workflow, RGP supports keyboard shortcuts and back and forward history to quickly navigate throughout the UI.

RGP tracks navigation history, which allows users to navigate back and forward between all of RGP’s panes. This is achieved using global navigation hotkeys shown above, or the back and forward buttons shown below:

_images/rgp_navigation.png

Currently, back and forward navigation is restricted to pane switches and moving between events within a pane.

Overview Windows

Frame summary (DX12 and Vulkan)

This window describes the structure of a profile from a number of different perspectives.

_images/rgp_frame_summary_1.png

The System activity section displays a system-level view of sync operations and when command buffers were submitted to the GPU. Speaking in general terms, all profiles contain two types of data: command buffer timing data and SQTT timing data. This pane displays the former, and the rest of RGP displays the latter.

Along the top, we find a series of controls:

  • GPU and CPU based frames: Controls how to display frame boundaries, which are also bracketed by black markers. The difference in time between both modes can help to visualize latency between workload submission and execution. The driver provides each command buffer with a frame number, a CPU submit timestamp, a GPU start timestamp, and a GPU end timestamp.

    • GPU-based frames: Interprets frame boundaries to begin when a present finished on the GPU.

    • CPU-based frames: Interprets frame boundaries to begin when a present was submitted on the CPU.

  • Workload views: Provide four different ways to view the same data:

    • Command buffers: Shows a list of all command buffers in a submission. Disabling this will condense all command buffers into a single submission block which also specifies the number of contained command buffers.

    • Sync objects: Toggles whether to display signals and waits.

    • Sequential: An alternate view which shows data linearly as opposed to stacked. The dark right-most portion of command buffers and submits indicates execution time on the GPU.

    • GPU only: A flat view of the data which represents solely GPU work. This helps visualize parallelism among all GPU queues.

  • CPU submission markers: Draw vertical lines to help visualize when the CPU issued certain types of workloads to the GPU.

  • Zoom controls: Consistent with the rest of the tool, these allow users to drill down into points of interest. More information can be found under the Zoom Controls section.

In the middle, we find the actual view. Each queue (Graphics, Compute, Copy) gets its own section. The alternating grey and white backgrounds indicate frame boundaries. The blue region indicates which command buffers were profiled with SQTT data, for more detailed event analysis in other sections of the tool. Note that command buffers are visualized using two shades of the same color. The lighter shade represents time spent prior to reaching the GPU, and the darker shade represents actual execution.

Please note that the view is interactive, making it possible for users to select and highlight command buffers, sync objects, and submission points.

Users can correlate between command buffer timing data and SQTT data by right-clicking on a command buffer within the “Detailed GPU events” region. This will bring up a context menu which contains three menu items for finding the first event within the selected command buffer. Selecting one of the menu items will navigate to the appropriate pane and set focus on the specified event.

Along the bottom, we find information about user selections:

  • Submit time: Specifies when work was issued by the CPU

  • Submit duration: Specifies the full duration of the submit

  • Enqueue duration: Specifies how long the work was queued before beginning on the GPU

  • GPU duration: Specifies how long the GPU took to execute it.

Below the queue timings view we find the following summary:

_images/rgp_frame_summary_2.png

This shows an interpretation of the queue timings data to determine which processor is the bottleneck. By default, if the GPU is idle more than 5% of the time then the profile is considered to be CPU-bound. This percentage may be adjusted in RGP settings.
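
As a rough worked example of this rule (the numbers below are made up for illustration and do not come from any particular profile):

  #include <cstdio>

  // Minimal sketch of the default boundness rule described above.
  int main() {
      const double frameDurationNs = 16'000'000.0; // total frame time (illustrative)
      const double gpuBusyNs       = 14'800'000.0; // time the GPU spent executing work
      const double idlePercent     = 100.0 * (frameDurationNs - gpuBusyNs) / frameDurationNs;

      const double thresholdPercent = 5.0; // adjustable in RGP settings
      // 7.5% idle is above the 5% threshold, so this frame would be reported as CPU-bound.
      std::printf("GPU idle: %.1f%% -> %s-bound\n", idlePercent,
                  idlePercent > thresholdPercent ? "CPU" : "GPU");
  }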

Please note that the values displayed for Frame duration and Frame rate are sourced from SQTT data. In other words, they are based on duration and shader clock frequency used in other RGP panes such as Wavefront occupancy.

The Profiling overhead shows the amount of profiling data that was written to video memory by the hardware while gathering the RGP profile. The profiling overhead is also expressed in terms of memory bandwidth used to write the data. The profiling overhead is comprised of both SQTT data and the cache counter data collected while profiling.

The Queue submissions and Command buffers pie charts show the number of queue submissions and command buffers in the frame broken down by the Direct and Compute queues. Compute submissions are colored in yellow and graphics submissions are colored in light blue. The Sync Primitives section counts how many unique signal and wait objects were detected throughout the profile. Please note that only signals and waits from queue operations are included in the profile data. For instance, any Vulkan signals originating from vkAcquireNextImageKHR will not appear since that is not a queue operation.
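
The hedged Vulkan fragment below illustrates the difference: the semaphore signaled by vkAcquireNextImageKHR would not be counted, while the semaphores waited on and signaled by vkQueueSubmit would. All handles are assumed to have been created elsewhere, and error handling is omitted:

  #include <vulkan/vulkan.h>

  // Sketch only: device, swapchain, queue, command buffer and both semaphores
  // are assumed to have been created elsewhere.
  void submitFrame(VkDevice device, VkSwapchainKHR swapchain, VkQueue graphicsQueue,
                   VkCommandBuffer cmdBuf, VkSemaphore imageAvailable, VkSemaphore renderFinished)
  {
      uint32_t imageIndex = 0;

      // Signals 'imageAvailable' from the presentation engine. This is NOT a queue
      // operation, so it would not be counted in the Sync Primitives section.
      vkAcquireNextImageKHR(device, swapchain, UINT64_MAX, imageAvailable,
                            VK_NULL_HANDLE, &imageIndex);

      VkPipelineStageFlags waitStage = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT;

      VkSubmitInfo submitInfo = {};
      submitInfo.sType                = VK_STRUCTURE_TYPE_SUBMIT_INFO;
      submitInfo.waitSemaphoreCount   = 1;
      submitInfo.pWaitSemaphores      = &imageAvailable;
      submitInfo.pWaitDstStageMask    = &waitStage;
      submitInfo.commandBufferCount   = 1;
      submitInfo.pCommandBuffers      = &cmdBuf;
      submitInfo.signalSemaphoreCount = 1;
      submitInfo.pSignalSemaphores    = &renderFinished;

      // The wait and signal here are queue operations, so they are the kind of
      // sync objects shown in the System activity view and counted above.
      vkQueueSubmit(graphicsQueue, 1, &submitInfo, VK_NULL_HANDLE);
  }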

_images/rgp_frame_summary_3.png

The Event statistics pie chart and table show the event counts colored by type. In the above example there are 281 Dispatch and 1,633 DrawIndexedInstanced events. The Instanced primitives histogram shows the number of events that drew N (1 to 16+) instances. In the example above we see that most events drew just a single instance, whereas a lesser number of events drew 2-9 and 16 instances.

_images/rgp_frame_summary_4.png

Geometry breakdown gives a summary of the vertices, shaded primitives, shaded pixels, and instanced primitives. In the above example we can see that the GS is being used to expand the number of shaded primitives. Also, looking at the Rendered Primitives histogram we can see that one draw uses between 0 and 1K primitives, and the other draw call uses 11K or more primitives. This makes sense given that the profile is from the D3D12nBodyGravity SDK sample.

Profile summary (OpenCL or HIP)

This window describes the structure of a profile from a number of different perspectives.

_images/rgp_profile_summary_1.png

The System activity section displays a system-level view of when command buffers were submitted to the GPU. Speaking in general terms, all profiles contain two types of data: command buffer timing data and SQTT timing data. This pane displays the former, and the rest of RGP displays the latter. For OpenCL applications multiple dispatches that can be submitted without host synchronization are grouped into command buffers automatically by the OpenCL driver. This grouping reduces submission overhead.

Along the top, we find a series of controls:

  • Workload views: Provide two different ways to view the same data:

    • Sequential: An alternate view which shows data linearly as opposed to stacked. The dark right-most portion of command buffers and submits indicates execution time on the GPU.

    • GPU only: A flat view of the data which represents solely GPU work. This helps visualize parallelism among all GPU queues.

  • CPU submission markers: Draw vertical lines to help visualize when the CPU issued certain types of workloads to the GPU.

  • Zoom controls: Consistent with the rest of the tool, these allow users to drill down into points of interest. See the section entitled Zoom Controls for more information.

In the middle, we find the actual view. Each queue applicable to OpenCL or HIP (Compute, Copy) gets its own section. Note that command buffers are visualized using two shades of the same color. The lighter shade represents time spent prior to reaching the GPU, and the darker shade represents actual execution.

Please note that the view is interactive, making it possible for users to select and highlight command buffers, sync objects, and submission points.

Along the bottom, we find information about user selections:

  • Submit time: Specifies when work was issued by the CPU

  • Submit duration: Specifies the full duration of the submit

  • Enqueue duration: Specifies how long the work was queued before beginning on the GPU

  • GPU duration: Specifies how long the GPU took to execute it.

Below the queue timings view we find the following summary:

_images/rgp_profile_summary_2.png

This shows an interpretation of queue timings data to determine which processor is the bottleneck. By default, if the GPU is idle more than 5% of the time then the profile is considered to be CPU-bound. This percentage may be adjusted in RGP settings.

Please note that the value displayed for Profile duration is sourced from SQTT data. In other words, it is based on duration and shader clock frequency used in other RGP panes such as Wavefront occupancy.

The Profiling overhead shows the amount of SQTT data that was written to video memory by the hardware while gathering the RGP profile. The profiling overhead is also expressed in terms of memory bandwidth used to write the SQTT data.

The Event statistics pie chart and table show the event counts. For OpenCL, the items are colored by OpenCL API type. For HIP, the items are colored by either kernel name (for dispatches) or HIP API type (for other events). In the example below, there are 89 clEnqueueNDRangeKernel calls and 7 clEnqueueFillBuffer calls. The meaning of CmdBarrier() is explained in the Barriers section.

_images/rgp_profile_summary_3.png

Barriers

The developer is now responsible for the use of barriers in their application to control when resources are ready for use in specific parts of the frame. Poor usage of barriers can lead to poor performance but the effects on the frame are not easily visible to the developer - until now. The Barriers UI gives the developer a list of barriers in use on the graphics queue, including the additional barriers inserted by the driver.

Note that in older profiles or if the barrier origin isn’t known, all barriers and layout transitions will be shown as ‘N/A’. Using an up-to-date display driver will ensure that this information is available.
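
For reference, an application-issued barrier with a layout transition might be recorded along the lines of the following Vulkan sketch. This is an illustration only; the command buffer and image handles are assumed to exist and error handling is omitted:

  #include <vulkan/vulkan.h>

  // Sketch of an application-inserted barrier with a layout transition,
  // moving an image from color-attachment writes to shader reads.
  void transitionToShaderRead(VkCommandBuffer cmdBuf, VkImage image)
  {
      VkImageMemoryBarrier barrier = {};
      barrier.sType               = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER;
      barrier.srcAccessMask       = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT;
      barrier.dstAccessMask       = VK_ACCESS_SHADER_READ_BIT;
      barrier.oldLayout           = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL;
      barrier.newLayout           = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL;
      barrier.srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
      barrier.dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
      barrier.image               = image;
      barrier.subresourceRange    = { VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1 };

      // Waits for color-attachment writes to finish before fragment shader reads.
      // Barriers like this are listed in the Barriers table as application barriers.
      vkCmdPipelineBarrier(cmdBuf,
                           VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT,
                           VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT,
                           0, 0, nullptr, 0, nullptr, 1, &barrier);
  }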

_images/rgp_barriers_1.png

The summary at the top left of the UI quickly lets the developer know if there is an issue with barrier usage in the frame. When calculating the percentage, only portions of a barrier’s duration which are not overlapped by one or more events from any queue are taken into consideration. For instance, if a barrier has a duration of 100 ns, but 80 ns of that barrier’s duration are overlapped by other events (on the same queue or on a different queue), then only 20 ns of that particular barrier contributes to the percentage calculation. In the case shown above, the barrier usage is taking up 0% of the frame.

This summary also displays the average number of barriers per draw or dispatch and the average number of events per barrier issue.

The table shows the following information:

  1. Event Numbers - ID of the barrier - selecting an event in this UI will select it on the other Events windows

  2. Duration - Lifetime of the barrier

  3. Drain time - This is the amount of time the barrier spends waiting for the pipeline to drain, or work to finish. Once the pipeline is empty, new wavefronts can be dispatched

  4. Stalls - The type of stalls associated with the barrier - where in the graphics pipe we need the work to drain from

  5. Layout transitions - A blue check box indicates if the barrier is associated with a layout transition. There are six columns indicating the type of layout transition. These are described in the Layout transition section below.

  6. Invalidated - A list of invalidated caches

  7. Flushed - A list of flushed caches

  8. Barrier type - Whether the barrier originated from the application or from the driver (or ‘N/A’ if unknown)

  9. Reason for barrier - In the case of driver-inserted barriers, a brief description of why this barrier was inserted

The rows in the table can be sorted by clicking on a column header.

NOTE: Selecting a barrier in this list will select the same event in the other Event windows.

The user can also right-click on any of the rows and navigate to the Wavefront occupancy, Event timing, Instruction timing or Pipeline state panes and view the event represented by the selected row in these panes, as well as in the side panels. The user can also see the parent command buffer in the Frame summary pane or navigate to the Render/depth targets view and view the event in the timeline.

Below is a screenshot of what the right-click context menu looks like:

_images/rgp_barriers_2.png

Layout Transitions

The following Layout Transition columns are shown in the Barriers table:

  1. Depth/Stencil Decompress: This barrier is emitted when a depth/stencil surface is decompressed. Depth/stencil surfaces are often stored compressed to reduce bandwidth to and from the color and depth hardware units.

  2. HiZ Range Resummarize: This barrier is emitted when a depth/stencil buffer, which has corresponding hierarchical Z-buffer data, is modified. This barrier ensures that the modified data is reflected into the hiZ-buffer, allowing for correct culling and depth testing.

  3. DCC Decompress: This barrier is emitted when Delta Color Compression compressed color data needs to be decompressed.

  4. FMask Decompress: This barrier is emitted when FMask data is decompressed. FMask is used to compress MSAA surfaces. These surfaces must be decompressed before they can be read by texture hardware units.

  5. Fast Clear Eliminate: This barrier is emitted when the driver performs a fast clear. For fast clears, a barrier is needed to read the clear color before filling the render target. Clearing to specific values (typically 0.0 or 1.0) may allow the GPU to skip the eliminate operation.

  6. Init Mask RAM: This barrier is emitted when the driver uses a shader to initialize memory used for compression.

See https://gpuopen.com/dcc-overview/ for more information on what may cause a DCC Decompress or what “clear” values can be used to skip Fast Clear Eliminates.
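
As an illustration, the hedged Vulkan fragment below clears a color attachment to 0.0 at the start of a render pass; depending on the driver and surface, using such a clear value may allow the Fast Clear Eliminate to be skipped. All handles are assumed to exist:

  #include <vulkan/vulkan.h>

  // Sketch: begin a render pass whose color attachment uses LOAD_OP_CLEAR with a
  // clear value of 0.0, one of the values noted above that may allow the
  // eliminate pass to be skipped. Handles are assumed to have been created elsewhere.
  void beginPassWithFastClearFriendlyValue(VkCommandBuffer cmdBuf, VkRenderPass renderPass,
                                           VkFramebuffer framebuffer, VkExtent2D extent)
  {
      VkClearValue clearValue = {};
      clearValue.color = { { 0.0f, 0.0f, 0.0f, 0.0f } }; // 0.0 (or 1.0) rather than an arbitrary color

      VkRenderPassBeginInfo beginInfo = {};
      beginInfo.sType             = VK_STRUCTURE_TYPE_RENDER_PASS_BEGIN_INFO;
      beginInfo.renderPass        = renderPass;
      beginInfo.framebuffer       = framebuffer;
      beginInfo.renderArea.offset = { 0, 0 };
      beginInfo.renderArea.extent = extent;
      beginInfo.clearValueCount   = 1;
      beginInfo.pClearValues      = &clearValue;

      vkCmdBeginRenderPass(cmdBuf, &beginInfo, VK_SUBPASS_CONTENTS_INLINE);
  }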

Barriers and OpenCL/HIP

Barriers for OpenCL or HIP profiles provide visibility into how the driver scheduled dispatches to the GPU and dependencies between kernel dispatches. These barriers are the same synchronization primitives used by DirectX12 and Vulkan that are described above.

The barriers shown in an OpenCL or HIP profile correspond to the barriers inserted by the OpenCL or HIP driver for one of the following reasons.

  1. Data Dependencies - There are data dependencies between subsequent dispatches. For example, reading the results of a previous kernel dispatch. This causes barriers to be inserted so that caches can be invalidated.

  2. Queue Profiling - (OpenCL-specific) The application has enabled the CL_QUEUE_PROFILING_ENABLE property when creating a command queue. This causes barriers to be inserted so that timestamps can be recorded.
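
A minimal sketch of reason 2, creating a command queue with profiling enabled (the context and device are assumed to exist; error handling is omitted):

  #include <CL/cl.h>

  // Sketch: create an OpenCL command queue with CL_QUEUE_PROFILING_ENABLE,
  // the property referred to in reason 2 above.
  cl_command_queue createProfilingQueue(cl_context context, cl_device_id device)
  {
      const cl_queue_properties props[] = {
          CL_QUEUE_PROPERTIES, CL_QUEUE_PROFILING_ENABLE,
          0
      };
      cl_int err = CL_SUCCESS;
      return clCreateCommandQueueWithProperties(context, device, props, &err);
  }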

OpenCL command queues process dispatches one after another and it is common for a subsequent kernel dispatch to use the results of a previous kernel dispatch. For this reason, it can be expected that an RGP profile will have a large number of barriers.

A barrier from a typical HIP application is shown below.

_images/rgp_barriers_opencl_1.png

As we see, the time taken due to barriers is typically very small since inter-dispatch dependencies only cause cache invalidations.

_images/rgp_barriers_opencl_2.png

It should be noted that the meaning of barriers in RGP for OpenCL/HIP is different from OpenCL or HIP built-in synchronization APIs. For example, barriers that appear in an OpenCL RGP profile are not related to the OpenCL synchronization APIs based on cl_event or cl_barrier. For this reason, the barriers seen in OpenCL/HIP profiles are displayed as CmdBarrier() which is not a part of the OpenCL or HIP API. For these profiles, RGP does not currently show API-specific events or host synchronization.

Context rolls

NOTE: This UI is only available for DirectX and Vulkan profiles.

Context rolling is a hardware feature specific to the RDNA and GCN graphics architecture and needs to be taken into consideration when optimizing draws for AMD GPUs. Each draw requires a set of hardware context registers that describe the rendering state for that specific draw. When a new draw that requires a different render state enters the pipeline, an additional set of context registers is required. The process of assigning a set of context registers is called context rolling. A set of context registers follows the draw through the graphics pipeline until it is completed. On completion of the draw, that associated set of registers is free to be used by the next incoming draw.

On RDNA and GCN hardware there are 8 logical banks of context registers, of which only 7 are available for draws. The worst-case scenario is that 8 subsequent draws each require a unique set of context registers. In this scenario the last draw has to wait for the first draw to finish before it can use the context registers. This causes a stall that can be measured and visualized by RGP. On RDNA2 hardware, while there are still 8 banks of context registers, one entire bank, typically bank 2, is reserved by the hardware and will typically appear completely empty in the Context rolls pane.

_images/rgp_context_rolls_1.png

In the example above, a DirectX 12 application, we can see that there are 223 context rolls in the frame and none of them are redundant. The Radeon GPU Profiler compares the context register values across state changes to calculate if the context roll was redundant. Redundant context rolls can be caused by the application and the driver. Ineffective draw batching can be a cause on the application’s end.

In addition, the meter shows the number of context rolls as a percentage of the number of draw calls, giving a visual indication of how efficient the frame is with regards to changing state. A lower percentage indicates that, on average, more draw calls are sharing state across the frame. This meter also shows a breakdown of Active vs. Redundant context rolls.

The chart to the right shows the number of events in each context.

The table underneath shows the state from the API’s perspective, and which parts of the state were involved in context rolls. The first column indicates how many context rolls it was involved in. The second column indicates how many of these changes were redundant with respect to the state (the state was written with the exact same value or another piece of state was changed). The next column indicates the number of context rolls that were completely redundant (the whole context was redundant, not just the state). The final column shows the number of context rolls of this state where this was the only thing that changed in the event.

_images/rgp_context_rolls_2.png

Selecting an API state shows, in the second table (the Events table), all the draw calls that rolled context due to this state changing, whether or not other states changed as well.

The Filter API-states… field in the top-right corner of the state table filters the state tree in real-time as you type. Only the state containing the filter text string will be shown.

NOTE: Selecting an event in this list will select the same event in the other Event windows.

The user can also right-click on any of the rows and navigate to Wavefront occupancy, Event timing or Pipeline state panes and view the event represented by the selected row in these panes, as well as in the side panels. Below is a screenshot of what the right-click context menu looks like.

_images/rgp_context_rolls_3.png

NOTE: When selecting events on the event panes and using the right-click context menu to jump between panes, the option to “View in context rolls” will only be available if the selected event is currently present in the events table on the context rolls pane.

Most expensive events

The Most Expensive events UI allows the developer to quickly locate the most expensive events by duration. At the top of the window is a histogram of the event durations. The least expensive events are to the left of the graph and the most expensive to the right. A blue summary bar with an arrow points to the bucket that is the most costly by time. The events in this bucket are most in need of optimization. The double slider below the chart can be used to select different regions of the histogram. The summary and table below will update as the double slider’s position is changed. In the example below we can see that the most expensive 5% of events take 51% of the frame time.

Below the histogram is a summary of the frame. In this case, the top 15% of events take 99% of the frame time, with 52% of the selected region consisting of graphics events and 48% async compute events.

The table below the summary shows a list of the events in the selected region with the most expensive at the top of the list.

_images/rgp_most_expensive_events_1.png

NOTE: Selecting an event in this list will select the same event in the other Event windows.

The user can also right-click on any of the rows and navigate to Wavefront occupancy, Event timing or Pipeline state panes and view the event represented by the selected row in these panes, as well as in the side panels. Below is a screenshot of what the right-click context menu looks like.

_images/rgp_most_expensive_events_2.png

The API Shader Stage Control shown in the last column of the table indicates which API shader stages are active in the pipeline used by the given event.

Render/depth targets

NOTE: This UI is only available for DirectX and Vulkan profiles.

This UI provides an overview of all buffers that have been used as render targets in draw calls throughout the frame.

_images/rgp_render_targets_overview_1.png

The screen is split into two sections, a timeline view and a tree view listing:

_images/rgp_render_targets_overview_2.png

The graphical timeline view illustrates the usage of render targets over the duration of the frame. Other events like dispatches, copies, clears and barriers are shown at the bottom of this view.

Zoom controls can be used to focus in on a section of the timeline. More information on zoom controls can be found under the Zoom Controls section. Each solid block in this view represents a series of events that overlap and draw to the same render target within the same pass. A single click on one of these highlights the corresponding entry in the tree view.

_images/rgp_render_targets_overview_3.png

This section lists all of the render targets and their properties found in the frame. Based on the active grouping mode, it shows a top-level listing of either render targets or passes. The grouping can be configured in two ways:

  • Group by target - The top level consists of all render targets found in the frame, plus per-frame stats. Child entries show per-pass stats for each render target.

  • Group by pass - The top level consists of all passes found in the frame. Child entries show per-pass stats for each render target.

Here are the currently available columns:

  • Legend - The color of the render target in the timeline.

  • Name - The name of the render target. Currently this is sequential and based on the first occurrence of each render target in the frame.

  • Format - The format of each render target.

  • Width - Width of the render target.

  • Height - Height of the render target.

  • Draw calls - Number of draw calls that output to this render target.

  • Compression - Indicates whether compression is enabled for this render target or not.

  • Sample count - MSAA sample count of the render target.

  • Out of order draw calls - Number of out of order draw calls issued to this render target. This column is not shown for profiles taken on RDNA GPUs.

  • Duration - The total duration of all the events that rendered to the render target. For example, if 3 events write to a depth buffer the duration will be the sum of these 3 event durations.

The rows in the table can be sorted by clicking on a column header.

NOTE:

  • Selecting any item in either the timeline view or the tree view will select the corresponding item in the other view.

  • Selecting any item in either the timeline view or the tree view will select the earliest event represented by that item in other sections of the tool.

Pipelines

This overview pane provides details of the pipeline usage in the profile.

_images/rgp_pipeline_summary_1.png

The pane is divided into three sections:

Pipeline summary - Displays a list of each pipeline API configuration found in the profile.

Pipelines - Displays a table with an entry for each pipeline found in the profile and child entries for each shader stage active in the pipeline.

Events - Displays all events that use the selected pipeline in the Pipelines table.

Pipeline summary

_images/rgp_pipeline_summary_2.png

The pipeline summary section displays all unique pipeline configurations colored by API shader stage.

  • Unique is defined as having the same active API shader stages

Next to each configuration is a count of how many pipelines in the profile matched the configuration.

Pipelines

_images/rgp_pipeline_summary_3.png

The Pipelines section contains a table with an entry for each pipeline found in the profile.

Each entry in the table displays the following information:

  1. Bucket ID - ID to match pipeline to event state bucket used for grouping in other panes.

  2. Hash - 128-bit pipeline hash and API shader hash.

  3. Duration - The pipeline duration is the sum of the durations of all events which use this pipeline (overlapped areas only counted once). The shader stage duration displayed for child items in the table is the sum of the stage-specific shader durations for all events which use this pipeline (overlapped areas are only counted once).

  4. Event count - Number of events which use the pipeline and percentage out of total number of events in profile.

  5. Avg event duration - Average duration of events using this pipeline in the profile.

  6. Occupancy - Occupancy range and per-shader-stage occupancy for each pipeline.

  7. VGPRs - VGPR range and per-shader-stage VGPR usage for each pipeline.

  8. SGPRs - SGPR range and per-shader-stage SGPR usage for each pipeline.

  9. Scratch mem - Yes/No to indicate if the pipeline uses scratch memory.

  10. Wave mode - wave32/wave64 to indicate the mode of the shader. This column only appears for devices that support wave32 vs. wave64.

  11. Stages - The API Shader Stage Control indicating which stages are active for given pipeline.

The Filter pipelines… field can be used to filter items in the list by the API PSO hash. The Pipelines table can be sorted by clicking on a column header.

Below the table, the Bucket ID, API PSO hash and Driver internal pipeline hash for the currently-selected pipeline are displayed. There is also a quick link to view the selected pipeline in the Pipeline state view. This will navigate to the Pipeline state view for the first event associated with the pipeline.

Events

_images/rgp_pipeline_summary_4.png

The Events table displays all events which use the currently-selected pipeline in the Pipelines table.

Each entry in the table displays the following information:

  1. Event ID - ID for event

  2. Event - Event text displaying the API or Driver call for event

  3. Duration - Time event spent during frame in profile

The Events table can be sorted by clicking on a column header.

As with all event lists in RGP, the user can right-click to quickly navigate to the event in other panes.

_images/rgp_pipeline_summary_5.png

Device configuration

This UI reports the GPU configuration of the system that was used to generate the profile. The Radeon Developer Panel can retrieve profiles from remote systems so the GPU details can be different from the system that you are using to view the data. The clock frequencies refer to the clock frequency running when the capture was taken. The number in parentheses represents the peak clock frequency the graphics hardware can run at.

_images/rgp_device_configuration.png

Events Windows

This section of RGP is where users will perform most analysis at the event level. An RGP event is simply an API call within a command buffer that was issued by either the application or the driver.

The event windows allow filtering of the event string. The event string consists of the event index, the API call and parameters. Only events containing the filter string will be displayed. This works for the whole event string, not just the event index. For example, if the filter string is ‘8’, event 31 may be displayed if any of its parameters contain ‘8’.

Wavefront occupancy

This section presents users with an interactive timeline that shows GPU utilization, GPU counter data, and all events in the profile.

_images/rgp_wavefront_occupancy_1.png

There are five components: the Legend side panel, the Wavefront timeline view, one or more Counter views, the Events timeline view, and the Details side panel.

Note that the counter views are only available if the “Collect counters” checkbox is enabled in RDP.

Legend side panel

Pressing Hide Legend on the top left will hide the side panel with visualization controls and a color-coded legend for each view. The contents of each individual legend depend on its view.

_images/rgp_wavefront_occupancy_legend_1.png

Wavefront timeline view

This section shows how many wavefronts were in flight. All wavefronts are grouped into buckets which are represented by vertical bars. The top half shows wavefronts on the graphics queue, and the bottom half shows wavefronts on the async compute queue.

_images/rgp_wavefront_occupancy_2.png

Users may examine regions by selecting ranges within the graph and using the zoom buttons on the top right of the tab. Users may also hover over this view and use the mouse wheel to zoom and center in on a particular spot. A region of wavefronts can be selected by using the mouse button to drag over the desired region as shown below.

_images/rgp_wavefront_occupancy_3.png

You can zoom into the region by pressing Ctrl + Z, or by clicking on “Zoom to selection” (result shown below).

_images/rgp_wavefront_occupancy_4.png

You can also drag the graph if you are zoomed in. Hold down the space bar first, then hold the mouse button down. The graph will now move with the mouse.

Users may use the Color by combo-box on the top of the Wavefront occupancy legend to visualize wavefronts in different ways:

  • Color by API stage. Default. Shows which wavefronts correspond to which Vulkan/DX12 pipeline stage.

  • Color by RDNA (or GCN) shader stage. Shows which wavefronts correspond to which RDNA/GCN pipeline stage.

  • Color by hardware context. Shows which hardware context (0-7) the wavefronts ran on. This can be useful to visualize the number of context rolls that occurred.

  • Color by shader engine. Shows which shader engine the wavefronts ran on.

  • Color by event. Shows which wavefronts correspond to which event of the profile. Each event is assigned a unique color.

  • Color by pass. Groups wavefronts into different passes depending on which render target or attachment type (color, depth-only, compute, raytrace). These four types are assigned a base color, and each pass within each type is assigned a different shade of the base color. This can be useful to visualize when the application attempted to render different portions of a scene.

  • Color by API PSO. Shows which wavefronts correspond to which shader, based on the shader’s API PSO hash value.

  • Color by ray tracing. Shows which wavefronts correspond to shaders which perform ray tracing. Wavefronts from traditional ray tracing events as well as wavefronts from shaders with inlined ray tracing will be shown using the specified ray tracing color. All other waves will be shown as grey.

Color modes can be synchronized across the Wavefront occupancy and Event timing panes. To do this, simply hold down the Ctrl key when selecting a mode from any Color by combo box. The selected color mode will be used for the Wavefront timeline and the Event timeline in the Wavefront occupancy pane as well as for the Event timing pane.

Beneath the Color by combo-box there is another combo-box to help visualize the occupancy of certain RDNA or GCN pipeline stages. Beneath the pipeline stage combo-box is a color-coded legend which serves as a color reminder. Note these colors can be customized within Settings.

For OpenCL or HIP profiles, the wavefront occupancy view shows only compute work. This is because compute APIs such as OpenCL or HIP only dispatch compute shader waves. For this same reason, a number of the coloring options, such as hardware context and RDNA/GCN stages, are not applicable for OpenCL or HIP.

_images/rgp_wavefront_occupancy_opencl.png

Cache counters

This section visualizes the cache counter data collected while profiling. Cache counter data is only available on Radeon RX 5000 series and newer GPUs. While profiling, counter data is sampled at a fixed rate, roughly one sample every 4096 clock cycles.

_images/rgp_wavefront_occupancy_counters_1.png

Each counter is presented as a line graph that shows how the value of that particular counter varies through the frame. By correlating the counter data with both wavefront occupancy and the events in the frame, you can get a better understanding of how well different parts of the frame utilize the various GPU caches.

There are currently five cache counters collected while profiling. Each cache counter reports a hit percentage, which is the percentage of requests that hit data already in the cache.

  • Instruction cache hit - The percentage of read requests made that hit the data in the Instruction cache. The Instruction cache supplies shader code to an executing shader. Each request is 64 bytes in size.

  • Scalar cache hit - The percentage of read requests made from executing shader code that hit the data in the Scalar cache. The Scalar cache contains data that does not vary in each thread across the wavefront. Each request is 64 bytes in size.

  • L0 cache hit - The percentage of read requests that hit the data in the L0 cache. The L0 cache contains vector data, which is data that may vary in each thread across the wavefront. Each request is 128 bytes in size.

  • L1 cache hit - The percentage of read or write requests that hit the data in the L1 cache. The L1 cache is shared across all WGPs in a single shader engine. Each request is 128 bytes in size.

  • L2 cache hit - The percentage of read or write requests that hit the data in the L2 cache. The L2 cache is shared by many blocks across the GPU, including the Command Processor, Geometry Engine, all WGPs, all Render Backends, and others. Each request is 128 bytes in size.

The description of each counter can be viewed by hovering the mouse over the counter name in the legend to the left of the counter graph.

The sizes of the L0, L1 and L2 caches, which may vary depending on the GPU, are reported in the Device configuration pane in the Overview tab.

Users may use the legend on the left to choose which counters to include in the graph.

_images/rgp_wavefront_occupancy_counters_2.png

Each counter key in the legend is a tri-state button. Pressing the button cycles through 3 states: visible, visible and selected, and not visible.

Selecting a counter will cause the area under the line for the selected counter to be filled in. This can be done for one or more counters simultaneously. In this image, the user has clicked the color boxes for both the L1 and L2 cache hit counters.

_images/rgp_wavefront_occupancy_counters_4.png

A tooltip will be shown when the mouse hovers over the counter graphs. This tooltip shows the counter value of the closest point to the cursor, as well as the number of Requests, Hits, and Misses associated with that point. When a region is selected in the wavefront occupancy view, the tooltip will show aggregated data representing the selected region. Pressing the Ctrl key on the keyboard will temporarily hide the tooltip.

_images/rgp_wavefront_occupancy_counters_3.png

Collection of cache counters can be disabled when capturing a profile in the Radeon Developer Panel. In this case, the cache counter graphs will not be visible.

For a better understanding of the cache memory hierarchy for RDNA hardware, please refer to the following visual representation. This is taken from the RDNA architecture presentation found on gpuopen.com.

_images/rgp_rdna_cache_hierarchy.png

Ray tracing counters

When profiling a game that uses ray tracing, a second row of counter data will show ray tracing counter values. These counters are only available on Radeon RX 6000 series and newer GPUs.

_images/rgp_wavefront_occupancy_counters_5.png

There are currently two ray tracing counters collected while profiling. These counters provide information on the number of ray tests performed by the frame.

  • Ray box tests - The number of ray box intersection tests.

  • Ray triangle tests - The number of ray triangle intersection tests.

The user interaction for the ray tracing counters is identical to the user interaction for the cache counters.

Events timeline view

This section shows all events in your profile. This includes both application-issued and driver-issued submissions. Each event can consist of one or more active shader stages and these are shown with rectangular blocks. The longer the block, the longer the shader took to execute. If there is more than 1 shader active, then each shader stage is connected with a thin line to indicate they belong to the same event. This view just shows actual shader work; it doesn’t show when the event was submitted.

_images/rgp_wavefront_occupancy_5.png

Users may single-click on individual events to see detailed information on the details side panel described below. Zooming into this graph is done by selecting the desired region in the wavefront graph above. Additionally, zooming in on a single event can be done by selecting the event and clicking on ‘Zoom to selection’. More information can be found under the Zoom Controls section.

Users may use the Color by combo-box on the left to visualize events in different ways:

  • Color by queue. Default. Shows which events were submitted to graphics or async compute queues. In addition, the CP marker is shown in a unique color, as well as the barriers and layout transitions so they can be easily distinguished. Note that barrier and layout transitions originating from the driver are colored differently to those from the application, and this is shown in the legend below the timeline view.

  • Color by hardware context. Shows which events ran on which context. This can be useful to visualize the number of context rolls that occurred.

  • Color by event. Will show each event in a unique color.

  • Color by pass. Groups events into different passes depending on which render target or attachment type (color, depth-only, compute). These three types are assigned a base color, and each pass within each type is assigned a different shade of the base color. This can be useful to visualize when the application attempted to render different portions of a scene.

  • Color by command buffer. Shows each event in a color associated with its command buffer, making it easy to see which events are in the same command buffer.

  • Color by user events. Will colorize each event depending on which user event it is surrounded by.

  • Color by API PSO will color events by their API PSO hash values.

  • Color by instruction timing will only colorize events which contain detailed instruction timing information. All other events will be greyed out.

  • Color by ray tracing will only colorize raytracing events. All other events will be greyed out.

Beneath the Color by combo-box is the Event filter combo-box. This allows the user to visualize only certain types of events on the timeline. For example, the user can select to see draws, dispatches, clears, barriers, layout transitions, copies, resolves, events containing instruction trace data, and raytracing events. There is also an option to switch the CP marker on or off. Switching the CP marker off will just show the active shader blocks.

Beneath the Event filter combo-box is the Overlay combo-box. This allows the user to select which fixed “Overlays” to show in the timeline. Overlays are shown in one or more rows at the top of the timeline. They are useful to visualize the various states for each event. More than one Overlay can be enabled. The following Overlays are supported:

  • All. All available overlays will be shown

  • User events. Default. Displays all user events, if the captured frame contains any such events. The user events are stacked according to the nesting level, and a cross pattern indicates multiple overlapping user event regions. Moving the mouse cursor over one of the user events will show a tool-tip listing all user events under the cursor including timing information for each user event interval. A sketch of how an application can emit user events is shown after this list.

  • Hardware context. Displays all hardware contexts. Each hardware context has its own row. This allows the user to visualize the lifetime of each context.

  • Command buffer. Displays all command buffers. The command buffers are stacked according to the time of submission, so that if one command buffer is submitted before a previous command buffer has completed, the new command buffer will be stacked below the previous command buffer.

  • Render target. Displays all render targets. If more than one render target is active for a given time period, then the active render targets will be stacked. This allows the user to visualize the usage of render targets over the duration of the frame.
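
User events typically originate from API debug labels inserted by the application; this is an assumption about where such markers usually come from rather than something defined by this overlay. A minimal Vulkan sketch using VK_EXT_debug_utils (the function pointers are assumed to have been loaded via vkGetDeviceProcAddr):

  #include <vulkan/vulkan.h>

  // Sketch: wrap a group of draws in a labeled region. If the profiled frame
  // contains labels like this, they may appear as user events in the overlay.
  void drawShadowPass(VkCommandBuffer cmdBuf,
                      PFN_vkCmdBeginDebugUtilsLabelEXT beginLabel,
                      PFN_vkCmdEndDebugUtilsLabelEXT endLabel)
  {
      VkDebugUtilsLabelEXT label = {};
      label.sType      = VK_STRUCTURE_TYPE_DEBUG_UTILS_LABEL_EXT;
      label.pLabelName = "Shadow pass";  // text identifying this region
      label.color[0]   = 0.2f;           // optional color hint
      label.color[1]   = 0.6f;
      label.color[2]   = 0.2f;
      label.color[3]   = 1.0f;

      beginLabel(cmdBuf, &label);
      // ... record the draws belonging to the shadow pass here ...
      endLabel(cmdBuf);
  }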

The event duration percentile filter allows users to only see events whose durations fall within a certain percentile. For example, selecting the rightmost-region of the slider will highlight the most expensive events. When using the slider buttons on the duration percentile filter, a tooltip will display the time duration range that corresponds to the selected percentiles. One will also find a textbox to filter the timeline by event name.

_images/rgp_wavefront_occupancy_7.png

The same zooming and dragging that is available on the wavefront timeline view is also available here.

Lastly, there are colored legends on the bottom which serve as color reminders. Note these colors can be customized within Settings.

Details side panel

Pressing Hide Details on the top right will hide the side panel with more in-depth information. The contents of this panel will change, depending on what the user last selected. If a single event was selected in the Events timeline, the details side panel will look like the example below:

_images/rgp_details_panel_1.png

The Details side panel for a single event contains the following data:

  • The event’s API call name

  • The queue it was launched on

  • User event hierarchy (if present)

  • Start, End, and Duration timings

  • Hardware context and if it was rolled

  • The API shader hashes for all shaders used by the event

  • The API PSO hash for the event

  • The Driver internal pipeline hash for the event

  • Colored bar showing wavefront distribution per RDNA or GCN hardware stage

  • List of RDNA or GCN hardware stages and wavefront counts

  • Total wavefront count

  • Total threads

  • RDNA or GCN shader timeline graphic showing active stages and duration

  • A table showing resource usage for each API shader stage:

    • The VGPR and SGPR columns refer to the number of vector and scalar general purpose registers being used, with the number of registers allocated shown in parentheses.

    • The LDS column refers to the amount of Local Data Store that each shader stage is using, reported in bytes.

    • The Occupancy column refers to the Theoretical wavefront occupancy for the shader. This is reported as ‘A / B’, where ‘A’ is the number of wavefronts that can be run and ‘B’ is the maximum number of wavefronts supported by the hardware.

    • Tooltips explaining the data are available by hovering the mouse over the table header.

  • The API Shader Stage Control indicates which shader stages are active for the selected event.

  • Primitive, vertex, control point, and pixel counts

The ‘Duration’ shows the time from the start of the first shader to the end of the last shader, including any space between shaders where no actual work is done (denoted by a line connecting the shader ‘blocks’). The ‘Work duration’ only shows the time when the shaders are actually doing work. This is the sum of all the shader blocks, ignoring the connecting lines where no work is being done. If there is overlap between shaders, the overlap time is only accounted for once. If all shaders are overlapping, then the duration will be the same as the work duration.
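The ‘Work duration’ can be thought of as the union of the shader stage intervals, with overlapped time counted only once. The sketch below illustrates that idea; it is an illustration only, not RGP’s implementation:

  #include <algorithm>
  #include <cstdint>
  #include <vector>

  struct Interval { uint64_t start; uint64_t end; }; // one shader stage's active time

  // Sum of all intervals with overlapping regions counted only once ("work duration").
  uint64_t workDuration(std::vector<Interval> stages)
  {
      if (stages.empty()) return 0;
      std::sort(stages.begin(), stages.end(),
                [](const Interval& a, const Interval& b) { return a.start < b.start; });

      uint64_t total    = 0;
      uint64_t curStart = stages[0].start;
      uint64_t curEnd   = stages[0].end;
      for (const Interval& s : stages) {
          if (s.start > curEnd) {               // gap with no work: not counted
              total += curEnd - curStart;
              curStart = s.start;
              curEnd   = s.end;
          } else {
              curEnd = std::max(curEnd, s.end); // merge overlapping stages
          }
      }
      return total + (curEnd - curStart);
  }

  // 'Duration' as described above would simply be the last end minus the first
  // start, including any gaps between shader stages.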

If the user selects a range by clicking and dragging the mouse, the details side panel shows a summary of all the wavefront data contained in the selected region as shown below:

_images/rgp_details_panel_2.png

If the user selects a barrier, the details side panel will show information relating to the barrier, such as the barrier flags and any layout transitions associated with this barrier. It will also show the barrier type (whether it came from the application or the driver). Note that the barrier type is dependent on whether the video driver has support for this feature. If not, then it will be indicated as ‘N/A’. An example of a user-inserted barrier is shown below:

_images/rgp_details_panel_3.png

If the driver needed to insert a barrier, a detailed reason why this barrier was inserted is also displayed, as shown below:

_images/rgp_details_panel_5.png

If the user selects a layout transition, the details side panel will show information relating to the layout transition as shown below:

_images/rgp_details_panel_4.png

The user can also right-click on any event or overlay in the Events timeline view and navigate to the Event timing, Pipeline state, or Instruction timing pane, or to one of the panes in the Overview tab. The selected event or overlay will be shown in the chosen view.

In addition, the user can zoom into an event using the “Zoom to selection” option from this context menu.

Below is a screenshot of what the right-click context menu looks like.

_images/rgp_wavefront_occupancy_6.png

Wavefront occupancy customization

The Wavefront occupancy section of RGP is customizable. Users can hide views and reorder their vertical positions.

To hide a view, simply press the X button next to the view.

_images/rgp_occupancy_view_x_button.png

To show a hidden view, use the Views combo box in the top left of the tab.

_images/rgp_show_hidden_occupancy_view.png

The Views combo box can also be used to hide views.

To reorder a view’s vertical position within the tab, you can drag the view you want to reorder and drop it into a new position.

To do this, move the mouse over the drag button next to the view you want to move. A dashed blue rectangle will appear around the view to indicate which view will be dragged.

_images/rgp_occupancy_view_drag_button.png

Press and hold the drag button. A solid blue line will appear to indicate what the new position of the view will be after releasing the mouse button.

_images/rgp_occupancy_view_drop_indicator.png

The view will be dropped into its new position and remain there until you move it again. The Views combo box will be updated to reflect its new position.

_images/rgp_occupancy_view_new_position.png

The customization of the Wavefront occupancy section is treated like a normal RGP setting and persists upon closing and reopening RGP.

To return the Wavefront occupancy customization to its original state, press the Restore to default button in the top left of the tab.

_images/rgp_occupancy_view_restore_to_default.png

Note that the visibility state of the legends side panel is also saved.

Event timing

The event timing window shows a list of events and their corresponding timings. The tree view in the left-hand column shows each event name and its unique index, starting at 0; events are listed in sequential order. Events can be ordered into groups, and group categories are shown in bold text.

_images/rgp_event_timing_1.png

The pane to the right of the tree view shows a graphical representation of the duration for each event. The darker blue span to the right of each tree node shows the duration of all the events in that node.

In the graphic for each event (shown in light blue above), the first small block at the left is the CP marker, indicating when the event was issued. This is followed, some time later, by the actual work done by the shaders. The delay between the CP marker and the start of actual work may indicate bottlenecks in the application. One of the shaders may be waiting for a resource which is currently being used by another wave in flight and cannot start until it obtains that resource. The number indicated in this column is the time from when the first shader started work to when the last shader finished work. Each shader stage is represented by a rectangular block. The longer the block, the longer the shader took to execute. Shaders are linked by a solid line to show that they are connected in the pipeline. For groups, a dark line spans all events within the group, showing the time taken for that group to complete work.

Zoom settings on this pane are similar to the Wavefront occupancy pane. More information can be found under the Zoom Controls section.

Grouping modes

The events can be grouped together. Normally these groups do not affect the event ordering, but some grouping modes do (for example, sorting by state bucket).

  • Group by pass will show events depending on the render target or attachment type (color, depth-only, compute, raytrace).

  • Group by hardware context will group events by their hardware context, making it easy to see which events caused the context to change.

  • Group by state bucket (unsorted) will order the events by state bucket but won’t sort the state buckets by duration. Theoretically, all events in a state bucket use the same shaders. The duration of a state bucket is represented by the dark blue line corresponding to the state bucket group text.

  • Group by state bucket (serialized) will take all the event timings within the group and sum the total time that the shaders were busy, ignoring all empty space between events. This has the effect of serializing the shader work and doesn’t take into account that some shaders will be executing in parallel. This is used to highlight when you have a lot of small shaders whose cumulative work can be extensive. As an example, if you have 2 shaders which start at the same time and one takes 2000 clks and another takes 10000 clks, the total duration would be 12000 clks.

  • Group by state bucket (overlapped) takes into account the parallelism of the shader execution so will highlight shaders which take a long time to execute. Using the same example above, since both shaders start together, the total duration in this case would be 10000 clks. A sketch contrasting the serialized and overlapped sums is shown after this list.

  • Group by command buffer will group events depending on which command buffer they are on.

  • Group by user events will group the events depending on which user event(s) they are surrounded by.

  • Group by PSO will group events by their API PSO hash values.
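To make the difference between the serialized and overlapped sums concrete, below is a minimal standalone sketch (hypothetical timings, not RGP code) that computes both values from a set of shader busy intervals:

#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <vector>

struct Interval { uint64_t start, end; };  // busy time of one shader, in clocks

int main()
{
    // Two hypothetical shaders that start together: 2000 clks and 10000 clks of work.
    std::vector<Interval> shaders = { {0, 2000}, {0, 10000} };

    // Serialized: sum each shader's busy time, ignoring any parallel overlap.
    uint64_t serialized = 0;
    for (const auto& s : shaders) serialized += s.end - s.start;

    // Overlapped: length of the union of the busy intervals (overlap counted once).
    std::sort(shaders.begin(), shaders.end(),
              [](const Interval& a, const Interval& b) { return a.start < b.start; });
    uint64_t overlapped = 0, curStart = shaders[0].start, curEnd = shaders[0].end;
    for (size_t i = 1; i < shaders.size(); ++i)
    {
        if (shaders[i].start <= curEnd)
        {
            curEnd = std::max(curEnd, shaders[i].end);
        }
        else
        {
            overlapped += curEnd - curStart;
            curStart = shaders[i].start;
            curEnd   = shaders[i].end;
        }
    }
    overlapped += curEnd - curStart;

    // Prints: serialized = 12000 clks, overlapped = 10000 clks
    printf("serialized = %llu clks, overlapped = %llu clks\n",
           (unsigned long long)serialized, (unsigned long long)overlapped);
}

With these hypothetical timings the serialized sum is 12000 clks while the overlapped sum is 10000 clks, matching the examples given for the two grouping modes.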

The default grouping mode is by user event if user events are present in the profile. Otherwise the default will be to group by pass.

Note that grouping by hardware context or command buffer will group events by queue first. Grouping by pass or user event will chronologically group events irrespective of which queue they originated from. Grouping by state bucket just shows events in the graphics queue. Grouping by hardware context is shown below:

_images/rgp_event_timing_2.png

Color modes

The events can be rendered using different color schemes in the same manner as in the Wavefront occupancy view.

The user can also right-click on any of the events and navigate to Wavefront occupancy or Pipeline state panes, as well as Barriers, Most expensive events and Context rolls panes within the Overview tab, and view the selected event in these panes, as well as in the side panels.

Wavefront occupancy and event timing window synchronization

Zooming of the time scale and horizontal panning of the Wavefront occupancy view and Event timing view can be synchronized or adjusted independently. More information on synchronization can be found under the Synchronized Zoom heading.

The anatomy of an event

Two examples of typical draw call events are shown below:

_images/rgp_event_1.png _images/rgp_event_2.png

A shows the CP marker. This is the point at which the command processor in the GPU issues the work to be done. The work is then queued up until the GPU can process it.

B shows the work being done by the various shader stages. The gap between the CP marker and the start of B indicates that the GPU didn’t start on the workload straight away and was busy doing other things, for example, previous draw calls.

C shows any fixed-function work that needs to be done after the shaders have finished executing. This occurs when a draw call is doing depth-only rendering. The fixed-function work shown is the primitive assembly and scan conversion of the vertices shaded by the vertex shader.

Users may also obtain information about an event’s parent command buffer by right-clicking on an event. This will bring up a context menu which contains a menu item to find the event’s parent command buffer. Selecting this menu item will navigate to the Frame summary pane and set focus on the selected event’s parent command buffer. Once here, users can obtain valuable system-level insight about the surrounding context for the event in question.

Compute dispatches have a simpler structure. A sample compute event is shown below.

_images/rgp_compute_event.png

In a compute event, only compute shader waves are launched. Also, compute dispatches do not have any fixed function work after the shader work is finished.

Pipeline state

The pipeline state window shows the render state information for individual events by stage. In the example below, the event is a DirectX12 DrawInstanced call using a VS, GS, and a PS. Active stages are rendered in black and can be selected; grey stages are inactive on this draw and cannot be selected.

The user has selected the PS stage for viewing and it is rendered in blue to indicate this. Below is a tabbed display that allows switching between a summary of the wavefront activity for this draw, the per-wavefront register resources used by the shader, and the shader ISA disassembly.

The register values indicate the number of registers that the shader is using. The value in parentheses is the number of registers that have been allocated for the shader.

From this information and knowledge about the RDNA or GCN architecture we can calculate the theoretical maximum wavefront occupancy for the pixel shader. In this case the maximum of 8 wavefronts per SIMD are theoretically possible, but may be limited by other factors.

_images/rgp_pipeline_state_1.png
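As a rough illustration of how such a limit can be estimated, the sketch below computes a VGPR-imposed occupancy bound. The register budget and wave-slot count used here are hypothetical placeholders rather than values for any specific RDNA or GCN part, and the value reported by RGP also accounts for other limits such as SGPRs and LDS:

#include <algorithm>
#include <cstdio>

int main()
{
    // Hypothetical per-SIMD limits -- not taken from any specific RDNA or GCN GPU.
    const int maxWavesPerSimd   = 16;    // wave slots available on one SIMD
    const int vgprBudgetPerSimd = 1024;  // vector registers available on one SIMD

    // Vector registers allocated for the shader (the value in parentheses in RGP).
    const int vgprsAllocated = 120;

    // Occupancy is bounded by how many waves' worth of allocated registers fit on the SIMD.
    const int vgprLimitedWaves     = vgprBudgetPerSimd / vgprsAllocated;  // 8
    const int theoreticalOccupancy = std::min(maxWavesPerSimd, vgprLimitedWaves);

    printf("Theoretical occupancy: %d / %d waves per SIMD\n",
           theoreticalOccupancy, maxWavesPerSimd);
}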

Switching to the ISA tab will show the shader code at the ISA level. At the top, some general information will be given, such as the number of registers used and allocated and the various hash values for this event.

_images/rgp_pipeline_state_3.png

More information on the ISA tab can be found under the ISA View section.

Grouping modes

The grouping modes are the same as in the Event timing pane.

The user can also right-click on any of the events and navigate to the Wavefront occupancy or Event timing panes, as well as the Barriers, Most expensive events, Context rolls, Render/depth targets, and Pipelines panes within the Overview tab. The user can view the selected event in these panes, as well as in the side panels. Below is a screenshot of what the right-click context menu looks like.

_images/rgp_pipeline_state_2.png

Note: The Output Merger stage of a DirectX 12 application may report the LogicOp as D3D12_LOGIC_OP_COPY, even though it is set in an application as D3D12_LOGIC_OP_NOOP. These 2 operations are semantically the same if blending is enabled. A no-op indicates that no transform of the data is to be performed so the output is the same as the source.

Note: For OpenCL or HIP applications, the pipeline state does not show the graphics specific stages since they are not active during compute dispatches.

Raytracing events

For raytracing events, there are two possible compilation modes: Unified and Indirect. The AMD driver and compiler will choose the mode for each raytracing event. The compilation mode chosen for a particular event will be evident in the event name: events which use the Unified mode will have a <Unified> suffix, while events which use the Indirect mode will have an <Indirect> suffix. In the case of DirectX Raytracing, the full event names are DispatchRays<Unified> or ExecuteIndirect<Rays><Unified> and DispatchRays<Indirect> or ExecuteIndirect<Rays><Indirect>. For Vulkan, the full event names are vkCmdTraceRaysKHR<Unified> or vkCmdTraceRaysIndirectKHR<Unified> and vkCmdTraceRaysKHR<Indirect> or vkCmdTraceRaysIndirectKHR<Indirect>. The main difference between these two compilation modes has to do with how the individual shaders in the raytracing pipeline are compiled. In Unified mode, the individual shaders are inlined into a single shader, resulting in a single set of ISA. In Indirect mode, the individual shaders are compiled separately, and the functions in each shader end up as their own set of ISA instructions. Function call instructions are generated in the ISA to allow one function to call another. For the indirect mode, the overall occupancy of the event is affected by the resource usage of all shaders, even those that have a zero call count. Even if the shader function that uses the highest number of vector registers is not actually executed, the fact that it uses the most registers means that it could be the reason for lower overall occupancy for the event.

When selecting a raytracing event that uses the indirect compilation mode, the Pipeline state pane will look a bit different.

_images/rgp_pipeline_state_raytracing_1.png

There are three tabs available: Shader table, ISA, and Information.

The Shader table tab contains two main parts: an interactive flowchart representing the raytracing pipeline and a table containing the list of shader functions. Each shader function has an associated type. This type can be Ray generation, Traversal, Intersection, Any hit, Closest hit, Miss or Callable. The shader table lists each shader function, its type, resource usage statistics, instruction timing statistics, and both the API shader hash and the Internal pipeline hash. You can filter the table by shader type using the Shader types combo box. You can also filter the table by Export name using the Filter shaders… field. If you click on any hyperlinked text in the shader table, it will navigate to the ISA tab and show the ISA for the selected shader function. You can also use the right-click context menu to navigate to either the ISA tab or to the Instruction timing view.

If the Enable shader instrumentation checkbox was checked in Radeon Developer Panel when the profile was captured, the table will also include a column showing the number of average active lanes for each shader function, across all calls made to the function. The number of active lanes is sampled near the beginning of execution for each shader, giving an indication of the amount of thread divergence in the entire raytracing pipeline. When hovering the mouse over a cell in this column, a tooltip will be displayed to show the distribution of the number of active lanes for individual calls. This can give an indication of how many different execution paths through the pipeline were taken at runtime. Please note that enabling this setting in the Radeon Developer Panel may cause additional runtime overhead for the application that is being profiled.

_images/rgp_pipeline_state_raytracing_4.png

The flowchart gives a visual representation of the raytracing pipeline, as well as shows the relative percent cost of the shader functions in each stage. The percentage bars are color-coded as follows: Red indicates that a stage contains shaders that represent over 50% of the total cost for the event. Orange indicates that a stage contains shaders whose total cost is in the range of 10% to 50%, and green indicates that a stage’s total cost is less than 10%.

The flowchart also provides a quick way to filter the shader table. When a stage is clicked, the table will only show shader functions from that stage. You can filter more than one stage by holding down the CTRL key and clicking additional stages. Selected stages are shown as blue, unselected stages are shown as black, and disabled stages (stages with no corresponding shader functions) are shown as grey. You can remove all filters by clicking in any whitespace area in the flowchart.

Both the table and the flowchart will contain a full set of data for profiles captured with Instruction tracing enabled. For profiles captured without Instruction tracing, several columns in the table will show N/A instead of actual data. Similarly, the flowchart will not show the percent bars for profiles captured without Instruction tracing enabled.

The following screenshot shows how this view will look when Instruction timing data is not available.

_images/rgp_pipeline_state_raytracing_2.png

The ISA tab will also look different for raytracing events that use the indirect compilation mode. In addition to the normal ISA listing, there is also a drop down combo box that allows for viewing the ISA from a different shader function. For the selected shader function, the corresponding row from the shader table is also displayed for reference.

_images/rgp_pipeline_state_raytracing_3.png

Instruction timing

The Instruction timing pane shows the average issue latency of each instruction of a single shader. The instruction timing information is generated using hardware support on AMD RDNA and GCN GPUs. Generating instruction timing does not require recompilation of shaders or insertion of any instrumentation into shaders.

The Instruction timing pane shows RDNA or GCN ISA. For more details on the ISA, please refer to the following resources:

  • The AMD GPU ISA Documentation on GPUOpen. These guides provide detailed definitions of the instructions you may see in RGP.

  • The User Guide for AMDGPU Backend as part of the LLVM User Guides. This guide provides details on some minor differences you may see in the Instruction timing pane versus what you might read in the ISA guides on GPUOpen. For instance some VALU instructions may appear with an extra suffix to provide more information on the instruction encoding. These suffixes, which are added by the LLVM-based AMDGPU disassembler, are described here.

The Instruction timing pane for a shader is shown below.

_images/rgp_instruction_timing_1.png

Latency

Each shader line in the Instruction timing view shows the time taken between the issue of an instruction and the issue of the one after it. To illustrate what latency means, some sample ISA statements are shown below.

Best Case Instruction Issue: In the below image, we see five instructions. The 1 clk denotes the latency between the issue of each of the instructions and the issue of the following instruction. This example shows an ideal performance case where each instruction is issued at an interval of 1 clock.

_images/rgp_instruction_timing_example_1.png

Delays in Instruction Issue: In the below image, we see four export instructions. The first exp instruction has a rather long interval of 4,162 clocks. This can be expected, since the issue of an export instruction can be delayed for reasons such as unavailable memory resources which may be in use by other wavefronts. As a result, the instruction shows a long duration. Since the latency of waiting for memory resources was incurred by the first export instruction, the subsequent exports have much shorter durations.

_images/rgp_instruction_timing_example_2.png

Waitcounts and Instruction Issue: In the below image, we see seven instructions. There are two scalar buffer loads and three scalar ALU instructions, all of which issue with little latency. We then see a s_waitcnt instruction. The s_waitcnt has a longer issue interval of 2,088 clocks. The short latencies of the previous s_buffer_load_dword instructions may seem counter intuitive since those are memory load instructions. However, this is expected as s_waitcnt is a shader instruction used for synchronization to wait for previous instructions, such as the previous buffer loads, to finish. The s_waitcnt instruction will issue and then wait (in this case 2,088 clocks) until the next instruction which is the v_add_f32_e64 can be issued.

_images/rgp_instruction_timing_example_3.png

By default, the Latency between any two instructions is an average of the latency measured per hit for that instruction. The latency can also be displayed as an average per wavefront or as a total across all wavefronts. This can be toggled using the normalization mode drop down shown below.

_images/rgp_instruction_timing_normalization_mode.png

The number of clock cycles shown for a given instruction is also represented by a bar. The length of the bar corresponds to the number of clock cycles worth of latency for an instruction. Different colors are used in the bars to indicate which parts of an instruction’s latency were hidden by work performed on other wave slots while the subsequent instruction was waiting to be issued on its slot. This can be seen in the image below.

_images/rgp_instruction_timing_latency_bars.png

Solid green indicates how much of a given instruction’s latency was hidden by VALU work. Solid yellow indicates how much latency was hidden by SALU or SMEM work. A diagonal hatch pattern made up of both green and yellow indicates how much latency was hidden by both VALU and SALU work. Finally, red indicates how much latency was not hidden by other work being done on the GPU. It is likely that bars with large red segments indicate a stall occurring while the shader is executing. When the mouse hovers over a row in the Latency column, a tooltip appears showing the exact breakdown of that instruction’s latency.

In the image above, the total latency of the instruction is 853 clocks. Of those 853 clocks, 209 clocks worth of latency are hidden by SALU work on other slots and 554 clocks worth of latency are hidden by VALU work. The 209 clocks where both SALU and VALU work was being done are shown using the hatch pattern. The segment between 209 and 554 clocks is shown as green since only VALU work is being done. The segment between 554 and 853 clocks is shown as red since there is no other work being done. Since there is more VALU work being done at the same time, green is more prevalent than yellow in this bar.

Contrast this with the image below, where an instruction is shown where more latency is hidden by SALU work. In this case, yellow is more prevalent than green.

_images/rgp_instruction_timing_latency_bars_2.png

A red indicator will be shown in the vertical scroll bar corresponding to the location of the instruction with the highest latency. This allows you to quickly find the hotspot within the shader.

Hit Count

The Hit count for each instruction shows the number of times the instruction was executed for the selected event. Any basic blocks that have a hit count of zero across all wavefronts in a shader will automatically be collapsed when viewing an event for the first time, as shown below.

_images/rgp_instruction_timing_disabled_and_collapsed_block.png

Basic blocks with a current hit count of zero based on the current latency range and latency selection mode will also be grayed out, as shown below.

_images/rgp_instruction_timing_disabled_block.png

Instruction Cost Percent

The Instruction Cost for each ISA instruction shows the percentage of the Total Issue Latency of the whole shader. For shaders with branches where consecutive instructions can have varying hit counts, the Instruction Cost incorporates the extra hit counts for that instruction. This allows us to find the hot-spot in the shader.

The Instruction Cost for an ISA instruction is calculated as follows:

Instruction Cost = 100 * (Sum of All Latencies for ISA Instruction) / (Sum of All Latencies for the shader)
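As a minimal illustration of this calculation (hypothetical latency data, not RGP internals), the percentage could be computed as follows:

#include <cstdio>
#include <vector>

int main()
{
    // Hypothetical summed issue latencies (in clocks) for each ISA instruction,
    // already accumulated across all hits of all analyzed wavefronts.
    const std::vector<double> latencyPerInstruction = { 120.0, 4162.0, 36.0, 2088.0 };

    double shaderTotal = 0.0;
    for (double latency : latencyPerInstruction) shaderTotal += latency;

    for (size_t i = 0; i < latencyPerInstruction.size(); ++i)
    {
        const double costPercent = 100.0 * latencyPerInstruction[i] / shaderTotal;
        printf("instruction %zu: %.1f%% of total issue latency\n", i, costPercent);
    }
}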

Filtering wavefronts

By default the Latency, Hit count and Instruction cost values are calculated using all wavefronts that have been analyzed for a given shader. Information about the fastest wavefront and the slowest wavefront can also be displayed, providing insight into any outliers in terms of performance. The Wavefront latencies drop down (shown below) can be used to toggle between showing all wavefronts, the fastest wavefront and the slowest wavefront.

_images/rgp_instruction_timing_wavefront_latencies.png

It is also possible to filter which wavefronts are analyzed using the Wavefront Latencies Histogram (shown below).

_images/rgp_instruction_timing_wavefront_latencies_histogram.png

This histogram provides a visual representation of the full set of wavefronts for the current shader. The fastest wavefronts are on the left side of the histogram and the slowest wavefronts are on the right. Latency values increase moving from left to right. The height of each bar in the histogram gives a relative indication of how many wavefronts correspond to each set of latency values represented by the bars.

Below the histogram is a slider control that can be used to filter wavefronts. The two sliders allow you to specify a clock range for the wavefronts to analyze. Only wavefronts that fall into the specified range will contribute to the Latency, Hit count and Instruction cost percentage values displayed. If a range is set, the fastest in selection and slowest in selection filters will show information from the fastest and slowest waves within that range.

If all analyzed wavefronts have the same total latency, the histogram will be hidden, as all wavefronts would end up in a single bucket. Because of this, the histogram is hidden when there is only a single wavefront analyzed for the selected shader. Any time the histogram is hidden, the Wavefront latencies drop down and the Timeline in the Wavefront statistics section of the side panel will also be hidden.

Instruction Timing Capture Granularity

Instruction timing information is generated for the whole RGP profile, but data is limited to a single shader engine. Only waves executed by a single shader engine contribute to the hit counts and timing information shown in the Instruction timing pane. Please see the Radeon Developer Panel documentation for more information on how to capture instruction timing information.

To view all the events that have instruction timing information, the developer can choose the “Color by instruction timing” option in the Wavefront occupancy or the Event timing views.

Availability of Instruction Timing

In certain cases it is possible that the instruction timing information may not be available for all events. The main reasons why instruction timing information may not be present for an event are described below.

Hardware Architecture and Draw Scheduling: Instruction timing information is only sampled from some of the compute units on a single shader engine of the GPU. As a result, it is possible for events with very few waves to not have instruction data. This can happen if the GPU schedules the waves on a shader engine or compute unit that doesn’t have instruction trace enabled.

Internal Events: It should be noted that it is not possible to view instruction timing information for internal events such as Clear().

Navigation

The instruction timing for an event can be accessed by right clicking on that event and choosing the “View In Instruction timing” option. Since it is common to use the same shader in multiple events, RGP provides an easy way to toggle between multiple events that use the same shader using the event drop down shown below.

_images/rgp_instruction_timing_2.png

This allows the developer to study the behavior of the shader for different events. It is recommended to use the keyboard shortcuts (Shift + Up and Shift + Down) to change the API PSO selection and (Shift + Left and Shift + Right) to move across different events using the same shader. The API Shader Stage Control indicates which shader stages are active for the selected event. When an active stage is clicked, the Instruction timing pane will update to show the timing data for the selected shader stage.

Navigation of Raytracing events

For certain Raytracing events, an additional Export name drop down will be available. Whether or not this drop down is shown depends on the compilation mode chosen by the AMD driver and compiler for the selected event. There are two possible compilation modes: Unified and Indirect. The compilation mode chosen for a particular event will be evident in the event name: events which use the Unified mode will have a <Unified> suffix, while events which use the Indirect mode will have an <Indirect> suffix. In the case of DirectX Raytracing, the full event names are DispatchRays<Unified> or ExecuteIndirect<Rays><Unified> and DispatchRays<Indirect> or ExecuteIndirect<Rays><Indirect>. For Vulkan, the full event names are vkCmdTraceRaysKHR<Unified> or vkCmdTraceRaysIndirectKHR<Unified> and vkCmdTraceRaysKHR<Indirect> or vkCmdTraceRaysIndirectKHR<Indirect>. The main difference between these two compilation modes has to do with how the individual shaders in the raytracing pipeline are compiled. In Unified mode, the individual shaders are inlined into a single shader, resulting in a single set of ISA. In Indirect mode, the individual shaders are compiled separately, and the functions in each shader end up as their own set of ISA instructions. Function call instructions are generated in the ISA to allow one function to call another.

The way the ISA code is presented in the Instruction timing view follows the way the driver and compiler handle the shaders. For Unified mode, there is a single stream of ISA and the Instruction timing view treats it as a single shader. For Indirect mode, there are multiple streams of instructions, one for each shader in the raytracing pipeline. The instruction streams and their associated costs are displayed per-shader and appear one after the other in the Instruction timing view. Only shader functions with non-zero cost are displayed in the Instruction timing view. Shaders with zero cost can still be viewed in the Pipeline state pane.

To help with navigation among the various shader functions, the Export name drop down is available for any events that use the indirect compilation mode. This drop down allows the developer to toggle between the multiple shaders. The drop down contains the list of exports along with their Instruction cost. The exports will be sorted by the Instruction cost. Ctrl + Shift + Up and Ctrl + Shift + Down can be used to move among the list of Export names. This Export name drop down is shown below.

_images/rgp_instruction_timing_exports.png

Navigation in Compute profiles

In profiles collected for OpenCL or HIP applications, the navigation controls are slightly different. Instead of the API PSO drop down, there is an event name/kernel name drop down. This drop down contains an entry for each unique kernel dispatch found in the profile. Once an event name or kernel name is selected, the Event drop down can be used to choose between events that dispatch the selected kernel. The API Shader Stage Control is not available in Compute profiles. Keyboard shortcuts can be used to cycle through the available kernel names (Shift + Up and Shift + Down) and to move across different events using the selected kernel (Shift + Left and Shift + Right). The navigation controls for a Compute profile are shown below.

_images/rgp_instruction_timing_3.png

More information on some of the features available in the Instruction timing pane can be found under the ISA View section.

Instruction Timing Side Panel

The Instruction timing side panel provides additional information about the shader shown.

_images/rgp_instruction_side_panel.png

The main sections in the side panel are:

Identifiers: This section includes multiple hashes that can be used to identify the shaders used and the pipeline that they are a part of.

Wavefront Statistics: The wavefront statistics provide information about the selected range of wavefronts. As such, the information displayed depends on both the selected mode in the Wavefront latencies drop down as well as the range selected in the Wavefront Latencies Histogram.

The Timeline provides a visual representation of when the selected wavefronts were executed. When the Histogram is used to limit the range of wavefronts, the Timeline is updated such that waves that do not fall within the specified range are displayed as grey. Only waves that fall within the range are displayed as blue. This allows you to see where particular waves were executed. For instance, it might be expected that slower waves were executed early on if, for instance, memory caches were not yet warm. Using the Timeline in conjunction with the Histogram can help determine where a bottleneck might be.

The Branches table denotes the number of branch instructions in the shader and the percentage of the total number of branches that were taken by the shader.

The Instruction Types table provides information about the dynamic instruction mix of the shader’s execution. The columns denote the different types of instructions supported by RDNA and GCN. The counts denote the number of instructions of each category.

Each category’s count denotes the instruction count for that shader’s invocation in the event. Different executions of the same shader could have different Instruction statistics based on factors such as the number of wavefronts launched for the shader and loop parameters. The instruction categories are briefly described below. Please see the AMD GPU ISA Documentation for more details.

  • VALU: Includes vector ALU instructions

  • SALU: Includes scalar ALU instructions

  • VMEM: Includes vector memory and flat memory instructions

  • SMEM: Includes scalar memory instructions

  • LDS: Includes Local Data Share instructions

  • IMMEDIATE: Includes the immediate instructions such as s_nop and s_waitcnt

  • EXPORT: Includes export instructions

  • MISC: Includes other miscellaneous instructions such as s_endpgm

  • RAYTRACE: Includes the BVH instructions used during raytracing. Only shown when viewing profiles captured on a GPU that supports ray tracing

  • WMMA: Includes the WMMA instructions used during wave matrix multiply accumulate operations. Only shown when viewing profiles captured on a GPU that supports WMMA instructions

The instruction types table provides a useful summary of the shader’s structure especially for very long shaders.

Hardware Utilization: The Hardware utilization bar charts show the utilization of each functional unit of the GPU on a per-shader basis.

It should be noted that utilization shown is only for the shader being viewed. For example, in the image shown, the VALU utilization of the shader is 67.6%. This means that the Raytracing shader shown used 67.6% of the VALU capacity of the GPU. Other shaders may be concurrently executing on the GPU. Their usage of the VALU is not considered when showing the bar charts.

A functional unit’s utilization is calculated as follows:

Utilization % = 100 * (Hit Count of all instructions executed on the functional unit) / (Duration of analyzed wavefronts)
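For example (a minimal sketch with hypothetical counts, chosen to reproduce the 67.6% VALU figure mentioned above):

#include <cstdio>

int main()
{
    // Hypothetical values: 135200 VALU instruction hits over 200000 clocks of analyzed waves.
    const double valuHits     = 135200.0;
    const double durationClks = 200000.0;

    printf("VALU utilization = %.1f%%\n", 100.0 * valuHits / durationClks);  // 67.6%
}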

Shader Statistics: The shader statistics section provides useful information about the shader:

  • Shader Duration: This denotes the execution duration of the whole shader. It can be correlated with the timings seen for the same shader in other RGP views such as the Wavefront occupancy and the Event timing views.

  • Wavefronts: This denotes the total number of wavefronts in the shader and the number of wavefronts analyzed as part of building the instruction timing visualizations. It is expected that not all waves in the shader will be analyzed. This is for the same reasons described above when discussing the availability of instruction timing.

  • Theoretical Occupancy: From the register information and knowledge about the GPU architecture we can calculate the theoretical maximum wavefront occupancy for the shader.

  • Vector and Scalar Registers: The register values indicate the number of registers that the shader is using. The value in parentheses is the number of registers that have been allocated for the shader.

  • Local Data Share Size: This value indicates how many bytes of local data share are used by the shader. This is only displayed for Compute Shaders.

Call Targets: While viewing data for a shader that calls other functions, a Call targets list is displayed in the side panel whenever a “s_swappc” or “s_setpc” instruction with a non-zero hit count is selected. In the ISA view, a glyph is displayed next to any such instruction. For a “s_swappc” instruction, the Call targets list shows the names of the exports that control may jump to, along with a hit count indicating how many times each target was called. For a “s_setpc” instruction, the Call targets list shows the name of the export that control will return to. This feature is currently supported for pipelines used by <Indirect> raytracing events as well as for HIP kernels that call additional functions in their execution.

_images/rgp_instruction_timing_call_targets.png

Instruction Timing for RDNA

On RDNA GPUs, instruction timing can include certain instructions with a hit count of 0. Usually this will be an instruction called s_code_end and may also be present after the shader’s s_endpgm instruction. This is expected since this is an instruction added by the compiler to allow for instruction prefetching or for padding purposes. The hardware does not execute this instruction.

Such instructions may also be present in the ISA view in the Pipeline state pane.

API Shader Stage Control

Several views in RGP provide information about which API shader stages are active for a particular event or pipeline. This information is represented by the API Shader Stage control.

NOTE: This control is only available for DirectX and Vulkan profiles.

This control appears in the Most expensive events and Pipelines Overview panes, as well as in the Details side panel in the Wavefront occupancy and Event timing panes, and in the toolbar area of the Instruction timing pane.

Here are examples of what the control looks like for a few different DirectX12 and Vulkan pipelines.

DirectX12 pipeline with the VS and PS stages active:

_images/rgp_dx12_pipeline_stage_vs_ps.png

DirectX12 pipeline with the VS, HS, DS and PS stages active:

_images/rgp_dx12_pipeline_stage_vs_hs_ds_ps.png

DirectX12 pipeline with the VS, GS and PS stages active:

_images/rgp_dx12_pipeline_stage_vs_gs_ps.png

DirectX12 pipeline with the CS stages active:

_images/rgp_dx12_pipeline_stage_cs.png

DirectX12 pipeline with the RT stages active:

_images/rgp_dx12_pipeline_stage_rt.png

Vulkan pipeline with the VS and FS stages active:

_images/rgp_vk_pipeline_stage_vs_fs.png

Vulkan pipeline with the CS stages active:

_images/rgp_vk_pipeline_stage_cs.png

Vulkan pipeline with the RT stages active:

_images/rgp_vk_pipeline_stage_rt.png

This control can also indicate when a particular shader stage contains inline ray tracing. When this is detected, a stage will indicate this with a gradient red pattern painted in that stage’s box. Here is an example of a DirectX12 pipeline where the compute shader performs inline ray tracing:

_images/rgp_dx12_pipeline_stage_cs_with_inline_rt.png

ISA View

Several views in RGP display ISA for API shader stages. ISA is displayed for a single shader stage at a time using the same color coding scheme and tree structure.

ISA views appear in the Pipeline state pane and in the Instruction timing pane.

_images/rgp_isa_view_1.png

Basic blocks can be expanded and collapsed individually or all at once. To expand or collapse a single block, click on the arrow on the left side of the instruction line. To expand or collapse all blocks in a shader at once, use the (Ctrl + Right) or (Ctrl + Left) shortcut, respectively.

_images/rgp_isa_view_blocks_collapsed.png

Tokens can be selected and highlighted to see other instances of the selected token (instruction opcodes, registers and constants).

_images/rgp_isa_view_token_selected_and_highlighted.png

A basic block referenced by one or more branch instructions can be clicked to scroll to the branch instruction(s). Similarly, the block referenced in a branch instruction can be clicked to scroll to that block. Branch navigations are recorded and can be replayed using the navigation history.

_images/rgp_isa_view_branch_navigation_history.png

Columns can be customized by using the Viewing Options dropdown to show or hide them. They can also be rearranged by clicking on the column header and dragging them to a new location.

_images/rgp_isa_view_customize_columns.png

Text in any column can be searched for and the developer can navigate directly to a specific line using the controls displayed below.

_images/rgp_instruction_timing_find.png

Both the Search command (Ctrl + F) and the Go to line command (Ctrl + G) can be invoked using keystrokes.

Instruction lines that match the search results are highlighted. The vertical scroll bar will also indicate the location of all matches, giving you a visual indicator of where in the shader the various matches can be found.

_images/rgp_isa_view_search_results.png

The display of line numbers can be toggled using a keyboard shortcut (Ctrl + Alt + L).

Zoom Controls

Time based graphs in RGP provide Zoom controls for adjusting the time scale that is viewable on screen. The following set of zoom icons are displayed above each graph that supports zooming:

ZoomSelectionRef Zoom to selection

When Zoom to selection is clicked, the zoom level is increased to a selected region or selected event. A selection region is set by holding down the left mouse button while the mouse is on the graph and dragging the mouse either left or right. A colored overlay will highlight the selected region on the graph. For graphs that support it, an event may be selected by clicking on it with the mouse (either the left or right button). Zoom to selection can also be activated by right clicking on a selection on the graph and choosing the Zoom to selection context menu option. Zooming to a selected event can be accomplished by simply double clicking the event. Pressing the Z shortcut key while holding down the CTRL key activates Zoom to selection as well.

ZoomResetRef Zoom reset

When Zoom reset is clicked, the zoom level is returned to the original level to reveal the entire time span on the graph. The zoom level can also be reset using the H shortcut key.

ZoomInRef Zoom in

Increases the zoom level incrementally to display a shorter time span on the graph. The zoom level is increased each time this icon is clicked until the maximum zoom level is reached. Alternatively, holding down the CTRL key and scrolling the mouse wheel up while the mouse pointer is over the graph will also zoom in for a more detailed view. Zooming in can be activated with the A shortcut key. To zoom in quickly at a 10x rate, press the S shortcut key.

ZoomOutRef Zoom out

Decreases the zoom level incrementally to display a longer time span on the graph. The zoom level is decreased each time this icon is clicked until the minimum zoom level is reached (i.e. the full available time region). Alternatively, holding down the CTRL key and scrolling the mouse wheel down while the mouse pointer is over the graph will also zoom out for a wider view. Zooming out can be activated with the Z shortcut key. To zoom out quickly at a 10x rate, press the X shortcut key.

Zoom Panning

When zoomed in on a graph region, the view can be shifted left or right by using the horizontal scroll bar. The view can also be scrolled by dragging the mouse left or right while holding down the spacebar and the left mouse button. Left and right arrow keys can be used to scroll as well.

Synchronized Zoom

Normally, adjusting the view of a time based graph (by zooming in and scrolling) doesn’t affect graphs on other panes. This can be useful in some cases when tracking more than one item. However, it is sometimes useful to lock both the event timing and wavefront occupancy views to the same visible time window. There is an option to control this in the ‘General’ tab of the Settings section called Sync event time windows. With this enabled, any zooming and scrolling in one window will be reflected in the other. If adjustments are made in the wavefront occupancy view, the vertical scroll bar in the event timing view will be automatically adjusted so that there are always events shown on screen if an event isn’t manually selected.

User Debug Markers

User markers can help application developers to correlate the data seen in RGP with their application behavior. User Markers are currently not supported for OpenCL or HIP.

DirectX12 User Markers

For DirectX12, there are two recommended ways to instrument your application with user markers that can be viewed within RGP:

  1. using Microsoft® PIX3 event instrumentation, or

  2. using the debug marker support in AMD GPU Services (AGS) Library.

Using PIX3 event instrumentation for DirectX12 user debug markers

If your application has been instrumented with PIX3 user markers, then viewing the markers within RGP is simply a matter of recompiling the source code of the application with a slightly modified PIX header file. The steps described here require a WinPixEventRuntime version of at least 1.0.200127001.

The PIX3 event instrumentation functions supported by RGP are:

void PIXBeginEvent(ID3D12GraphicsCommandList* commandList, ...)
void PIXEndEvent(ID3D12GraphicsCommandList* commandList)
void PIXSetMarker(ID3D12GraphicsCommandList* commandList, ...)
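For reference, a typical way an application might use these calls around a draw is shown below (a hedged sketch; the pCommandList and ParticleCount identifiers are illustrative, and the markers only compile in when PIX event instrumentation is enabled):

#include "pix3.h"

// Begin a named region on the command list.
PIXBeginEvent(pCommandList, PIX_COLOR(0, 255, 0), "Draw Particles");

// This draw call will appear inside the "Draw Particles" user marker in RGP.
pCommandList->DrawInstanced(ParticleCount, 1, 0, 0);

// End the region and drop a single point-in-time marker.
PIXEndEvent(pCommandList);
PIXSetMarker(pCommandList, PIX_COLOR(0, 255, 0), "Finished Drawing Particles");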

The steps to update the PIX header file are:

1. Copy the entire samples\AmdDxExt folder provided in the RGP package to the location where the PIX header files (pix3.h, pix3_win.h) reside (typically at WinPixEventRuntime.[x.x]\Include\WinPixEventRuntime).

2. Add #include "AmdDxExt\AmdPix3.h" to the top of PIXEvents.h:

When using WinPixEventRuntime version 1.0.210209001 or newer:

#if defined(USE_PIX) || !defined(PIX_XBOX)
  #define PIX_CONTEXT_EMIT_CPU_EVENTS

  #ifndef PIX_XBOX
    #include "AmdDxExt\AmdPix3.h"
    #define PIX_AMD_EXT
  #endif
#endif

When using WinPixEventRuntime version 1.0.200127001:

#include "PIXEventsCommon.h"

#if defined(XBOX) || defined(_XBOX_ONE) || defined(_DURANGO)
# define PIX_XBOX
#else
#include "AmdDxExt\AmdPix3.h"
#endif

3. Update the PIXEvents.h file to add an Rgp prefix to the existing calls to PIXBeginEventOnContextCpu, PIXEndEventOnContextCpu and PIXSetMarkerOnContextCpu:

When using WinPixEventRuntime version 1.0.231030001 or newer:

#ifdef PIX_CONTEXT_EMIT_CPU_EVENTS
#ifdef PIX_AMD_EXT
  RgpPIXBeginEventOnContextCpu(destination, eventSize, context, color, formatString, args...);
#else
  PIXBeginEventOnContextCpu(destination, eventSize, context, color, formatString, args...);
#endif
#endif
#ifdef PIX_CONTEXT_EMIT_CPU_EVENTS
#ifdef PIX_AMD_EXT
  RgpPIXSetMarkerOnContextCpu(destination, eventSize, context, color, formatString, args...);
#else
  PIXSetMarkerOnContextCpu(destination, eventSize, context, color, formatString, args...);
#endif
#endif
#ifdef PIX_CONTEXT_EMIT_CPU_EVENTS
#ifdef PIX_AMD_EXT
  RgpPIXEndEventOnContextCpu(destination, context);
#else
  destination = PIXEndEventOnContextCpu(context);
#endif
#endif

When using WinPixEventRuntime version 1.0.210209001 up to 1.0.230302001:

#ifdef PIX_CONTEXT_EMIT_CPU_EVENTS
#ifdef PIX_AMD_EXT
  RgpPIXBeginEventOnContextCpuLegacy(context, color, formatString, args...);
#else
  PIXBeginEventOnContextCpu(context, color, formatString, args...);
#endif
#endif
#ifdef PIX_CONTEXT_EMIT_CPU_EVENTS
#ifdef PIX_AMD_EXT
  RgpPIXSetMarkerOnContextCpuLegacy(context, color, formatString, args...);
#else
  PIXSetMarkerOnContextCpu(context, color, formatString, args...);
#endif
#endif
#ifdef PIX_CONTEXT_EMIT_CPU_EVENTS
#ifdef PIX_AMD_EXT
  RgpPIXEndEventOnContextCpuLegacy(context);
#else
  PIXEndEventOnContextCpu(context);
#endif
#endif

When using WinPixEventRuntime version 1.0.200127001:

#if PIX_XBOX
  PIXBeginEvent(color, formatString, args...);
#else
#ifdef PIX_AMD_EXT
  RgpPIXBeginEventOnContextCpuLegacy(context, color, formatString, args...);
#else
  PIXBeginEventOnContextCpu(context, color, formatString, args...);
#endif
#endif
#if PIX_XBOX
  PIXEndEvent();
#else
#ifdef PIX_AMD_EXT
   RgpPIXEndEventOnContextCpuLegacy(context);
#else
   PIXEndEventOnContextCpu(context);
#endif
#endif
#if PIX_XBOX
  PIXSetMarker(color, formatString, args...);
#else
#ifdef PIX_AMD_EXT
  RgpPIXSetMarkerOnContextCpuLegacy(context, color, formatString, args...);
#else
  PIXSetMarkerOnContextCpu(context, color, formatString, args...);
#endif
#endif

4. Recompile the application. Note that the RGP user markers are only enabled when the corresponding PIX event instrumentation is also enabled with one of the preprocessor symbols: USE_PIX, DBG, _DEBUG, PROFILE, or PROFILE_BUILD.
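For example, a profiling build might enable the instrumentation explicitly before including the PIX header (a minimal sketch; defining USE_PIX project-wide in the build settings has the same effect):

// Enable PIX event instrumentation, and therefore the RGP user markers, in this build.
#define USE_PIX
#include "pix3.h"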

The PIX3 event instrumentation within the application continues to be usable for Microsoft PIX tool without additional side effects or overhead.

To find a more complete description of how to use the PIX event instrumentation, refer to https://blogs.msdn.microsoft.com/pix/winpixeventruntime/.

See many examples of using PIX event instrumentation at https://github.com/Microsoft/DirectX-Graphics-Samples.

Using AGS for DirectX12 user debug markers

The AMD GPU Services (AGS) library provides software developers with the ability to query AMD GPU software and hardware state information that is not normally available through standard operating systems or graphic APIs. AGS includes support for querying graphics driver version info, GPU performance, CrossFire™ (AMD’s multi-GPU rendering technology) configuration info, and Eyefinity (AMD’s multi-display rendering technology) configuration info. AGS also exposes the explicit Crossfire API extension, GCN shader extensions, and additional extensions supported in the AMD drivers for DirectX® 11 and DirectX 12. One of the features in AGS is the support for DirectX 12 user debug markers.

User markers can be inserted into your application using AGS function calls. The inserted user markers can then be viewed within RGP. The main steps to obtaining user markers are described below.

Articles and blogs about AGS can be found here: https://gpuopen.com/amd-gpu-services-ags-library/

Additional API documentation for AGS can be found at: https://gpuopen-librariesandsdks.github.io/ags/

Download AGS

Download the AGS library from: https://github.com/GPUOpen-LibrariesAndSDKs/AGS_SDK/

The library consists of pre-built Windows libraries, DLLs, samples, and documentation. You will need to use files in the following two directories.

  • Headers: AGS_SDK-master\ags_lib\inc

  • Libraries: AGS_SDK-master\ags_lib\lib

Integrate AGS header, libs, and DLL into your project

AGS requires one header (amd_ags.h) to be included in your source code. Add the location of the AGS header to the Visual Studio project settings and include the header in the relevant code files.

#include "amd_ags.h"

Link your exe against the correct AGS library for your project (32-bit or 64-bit, MD or MT static library, debug or release, or DLL).

Library Name                  AGS Runtime DLL required   Library Type
---------------------------------------------------------------------------------------------------------

64 Bit

amd_ags_x64.lib               amd_ags_x64.dll            DLL
amd_ags_x64_2015_MD.lib       NA                         VS2015 Lib (multithreaded dll runtime library)
amd_ags_x64_2015_MT.lib       NA                         VS2015 Lib (multithreaded static runtime library)
amd_ags_x64_2015_MDd.lib      NA                         VS2015 Lib (debug multithreaded dll runtime library)
amd_ags_x64_2015_MTd.lib      NA                         VS2015 Lib (debug multithreaded static runtime library)
amd_ags_x64_2017_MD.lib       NA                         VS2017 Lib (multithreaded dll runtime library)
amd_ags_x64_2017_MT.lib       NA                         VS2017 Lib (multithreaded static runtime library)
amd_ags_x64_2017_MDd.lib      NA                         VS2017 Lib (debug multithreaded dll runtime library)
amd_ags_x64_2017_MTd.lib      NA                         VS2017 Lib (debug multithreaded static runtime library)
amd_ags_x64_2019_MD.lib       NA                         VS2019 Lib (multithreaded dll runtime library)
amd_ags_x64_2019_MT.lib       NA                         VS2019 Lib (multithreaded static runtime library)
amd_ags_x64_2019_MDd.lib      NA                         VS2019 Lib (debug multithreaded dll runtime library)
amd_ags_x64_2019_MTd.lib      NA                         VS2019 Lib (debug multithreaded static runtime library)

32 Bit

amd_ags_x86.lib               amd_ags_x86.dll            DLL
amd_ags_x86_2015_MD.lib       NA                         VS2015 Lib (multithreaded dll runtime library)
amd_ags_x86_2015_MT.lib       NA                         VS2015 Lib (multithreaded static runtime library)
amd_ags_x86_2015_MDd.lib      NA                         VS2015 Lib (debug multithreaded dll runtime library)
amd_ags_x86_2015_MTd.lib      NA                         VS2015 Lib (debug multithreaded static runtime library)
amd_ags_x86_2017_MD.lib       NA                         VS2017 Lib (multithreaded dll runtime library)
amd_ags_x86_2017_MT.lib       NA                         VS2017 Lib (multithreaded static runtime library)
amd_ags_x86_2017_MDd.lib      NA                         VS2017 Lib (debug multithreaded dll runtime library)
amd_ags_x86_2017_MTd.lib      NA                         VS2017 Lib (debug multithreaded static runtime library)
amd_ags_x86_2019_MD.lib       NA                         VS2019 Lib (multithreaded dll runtime library)
amd_ags_x86_2019_MT.lib       NA                         VS2019 Lib (multithreaded static runtime library)
amd_ags_x86_2019_MDd.lib      NA                         VS2019 Lib (debug multithreaded dll runtime library)
amd_ags_x86_2019_MTd.lib      NA                         VS2019 Lib (debug multithreaded static runtime library)
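With MSVC, one of the libraries from the table can also be pulled in directly from source, for example (a hypothetical project choice is shown here; pick the row that matches your toolset and runtime):

// Hypothetical choice: 64-bit build using the VS2019 multithreaded DLL runtime library.
#pragma comment(lib, "amd_ags_x64_2019_MD.lib")
#include "amd_ags.h"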

Initialize AGS

When you have your project building, the first thing to do is initialize the AGS context.

// Specify AGS configuration (optional memory allocation callbacks)
AGSConfiguration config = {};

// Initialize AGS
AGSReturnCode agsInitReturn = agsInitialize(AGS_MAKE_VERSION(AMD_AGS_VERSION_MAJOR, AMD_AGS_VERSION_MINOR, AMD_AGS_VERSION_PATCH), &config, &m_AGSContext, &m_AmdgpuInfo);

// Report error on AGS initialization failure
if (agsInitReturn != AGS_SUCCESS)
{
        printf("\\nError: AGS Library was NOT initialized - Return Code %d\\n", agsInitReturn);
}

Initialize the DirectX12 Extension

Once the AGS context has been successfully created, we need to create the DirectX12 extension as follows:

// Create the device using AGS
AGSDX12DeviceCreationParams dxCreateParams = {hardwareAdapter.Get(), __uuidof(ID3D12Device), D3D_FEATURE_LEVEL_11_0};
AGSDX12ReturnedParams dxReturnedParams;
AGSReturnCode dxInitReturn = agsDriverExtensionsDX12_CreateDevice(m_AGSContext, &dxCreateParams, nullptr, &dxReturnedParams);

// Report error on AGS DX12 device creation failure
if (dxInitReturn != AGS_SUCCESS)
{
        printf("Error: AGS DX12 extension could not create a device - Return Code %d\n", agsInitReturn);
}
else
{
        printf("AGS DX12 device was created.\n");
        m_device = dxReturnedParams.pDevice;

        // Check whether user markers are supported by the current driver
        if (dxReturnedParams.extensionsSupported.userMarkers == 1)
        {
                printf("AGS_DX12_EXTENSION_USER_MARKERS are supported.\n");
        }
        else
        {
                printf("AGS_DX12_EXTENSION_USER_MARKERS are NOT supported.\n");
        }
}

Please note that the above code checks if the driver is capable of supporting user markers by looking at the extensions supported by the driver. This step may fail on older drivers.

Insert Markers in Application

The main functions provided by AGS for marking applications are:

agsDriverExtensionsDX12_PushMarker;
agsDriverExtensionsDX12_PopMarker;
agsDriverExtensionsDX12_SetMarker;

The example below shows how a draw call can be enclosed within a “Draw Particles” user marker, followed by the insertion of a standalone marker.

// Push a marker
agsDriverExtensionsDX12_PushMarker(m_AGSContext, pCommandList, "DrawParticles");

// This draw call will be in the "Draw Particles" User Marker
pCommandList->DrawInstanced(ParticleCount, 1, 0, 0);

// Pop a marker
agsDriverExtensionsDX12_PopMarker(m_AGSContext, pCommandList);

// Insert a marker
agsDriverExtensionsDX12_SetMarker(m_AGSContext, pCommandList, "Finished Drawing Particles");

Vulkan User Markers

Debug Marker Extension

Vulkan has support for user debug markers using the VK_EXT_debug_marker extension. Please read the following article for details:

https://www.saschawillems.de/blog/2016/05/28/tutorial-on-using-vulkans-vk_ext_debug_marker-with-renderdoc/

See code sample at:

https://github.com/SaschaWillems/Vulkan/blob/master/examples/debugmarker/debugmarker.cpp

Debug Utils Extension

The debug marker extension VK_EXT_debug_marker has been replaced with a new extension VK_EXT_debug_utils that provides additional support to narrow down the location of a debug message in complicated applications. The following document describes the capabilities of the new extension.

https://www.lunarg.com/wp-content/uploads/2018/05/Vulkan-Debug-Utils_05_18_v1.pdf

Both VK_EXT_debug_marker and VK_EXT_debug_utils extensions are supported in RGP. Inserting user markers via these extensions should generate user events in your RGP profile which you can visualize.
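As an illustration, a minimal sketch of inserting a user marker via VK_EXT_debug_utils is shown below (the instance, commandBuffer and vertexCount identifiers are assumptions from the surrounding application, and the extension must be enabled on the Vulkan instance):

#include <vulkan/vulkan.h>

// Load the extension entry points once.
auto pfnCmdBeginDebugUtilsLabelEXT = (PFN_vkCmdBeginDebugUtilsLabelEXT)
    vkGetInstanceProcAddr(instance, "vkCmdBeginDebugUtilsLabelEXT");
auto pfnCmdEndDebugUtilsLabelEXT = (PFN_vkCmdEndDebugUtilsLabelEXT)
    vkGetInstanceProcAddr(instance, "vkCmdEndDebugUtilsLabelEXT");

// Wrap a region of work on the command buffer with a named label.
VkDebugUtilsLabelEXT label = {};
label.sType      = VK_STRUCTURE_TYPE_DEBUG_UTILS_LABEL_EXT;
label.pLabelName = "Draw Particles";
label.color[0] = 0.0f; label.color[1] = 1.0f; label.color[2] = 0.0f; label.color[3] = 1.0f;

pfnCmdBeginDebugUtilsLabelEXT(commandBuffer, &label);
vkCmdDraw(commandBuffer, vertexCount, 1, 0, 0);
pfnCmdEndDebugUtilsLabelEXT(commandBuffer);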

Viewing User Markers

The RGP profile captured for a frame of an application instrumented as described above will contain the user markers. The user markers can be seen in the “Event timing” and “Pipeline state” views when you choose the “Group by user events” option, as shown below.

_images/rgp_user_markers_1.png

“Draw Particles” User marker with the draw calls enclosed in the User Marker

User markers can also be seen in the wavefront occupancy view when you color by user events. Coloring by user events is also possible in the event timing view. As seen below, any events enclosed by the same user marker will be shown with the same color. Any events not enclosed by user markers are shown in grey. The coloration is only affected by the Push/PopMarker combination; the SetMarker has no effect on the user event color since these markers simply mark a particular moment in time.

Additionally, the user event names are displayed in an overlay at the top of the event timeline view.

_images/rgp_user_markers_2.png

The full user event hierarchy is also visible on the third line of the side pane when clicking on individual events. If the event does not contain a user event hierarchy, nothing will be shown.

_images/rgp_user_markers_3.png

Events enclosed by user markers are colored in the wavefront occupancy view. They are also visible in the side panel.

RenderDoc & Radeon GPU Profiler interop BETA

In addition to the typical use case where RGP profiles are generated using Radeon Developer Panel, profiles can also be generated using RenderDoc. When an RGP profile is generated by RenderDoc, events can be correlated across both tools. This feature is only supported for DirectX12 and Vulkan.

Intended usage

When RenderDoc replays a captured frame, there are expected differences in performance when compared to a normal run of the application. Therefore, when a profile is generated by RenderDoc, the overall profile data may not accurately reflect the true performance of the application. For a more accurate representation of the overall application performance, a profile should be captured directly from the application using the Radeon Developer Panel.

The profile data generated from a RenderDoc capture, along with the supported interoperability features, can be useful in helping to determine which elements of a frame consume the most GPU time. Therefore, users are encouraged to leverage both methods of generating profile data when analyzing performance.

Obtaining a profile from RenderDoc

First, load RenderDoc and capture a frame as usual. When loading the capture into RenderDoc, make sure to use the Open Capture with Options menu item (under the main File menu in the RenderDoc user interface) and set the Replay optimisation level setting to Fastest. Without this setting, the profile may contain extra events that were not in the original frame, since RenderDoc can insert additional events at the other replay levels. Some RenderDoc captures may also fail to generate a profile if another replay level is used.

_images/rgp_rdc_interop_6.png

The Replay optimisation level setting can also be set globally for all captures in RenderDoc’s Settings dialog:

_images/rgp_rdc_interop_7.png

Next, make sure that the Core settings are configured to allow Radeon GPU Profiler Integration:

_images/rgp_rdc_interop_8.png

Finally, create a new profile for the loaded capture as shown below:

_images/rgp_rdc_interop_1.png

This will kick off the profiling process, which embeds a new profile into the RenderDoc capture file. The first time a profile is created, RenderDoc will prompt for the path to the Radeon GPU Profiler executable. Once profiling is complete, RenderDoc will launch the Radeon GPU Profiler and the new profile will be ready for analysis.

Known limitations

  • Users may correlate GPU work (draws/dispatches) across both tools. Note that this excludes entry points such as copies, barriers, clears, and indirect draw/dispatch.

  • Since the RenderDoc replayer serializes entry points, generated profiles could appear CPU bound. This can be seen as gaps in the wavefront occupancy view, which may not be present when obtaining the profile using Radeon Developer Panel.

  • Creating consecutive RGP profiles from the same RenderDoc instance sometimes fails. This occurs if users obtain multiple RenderDoc captures of the same application prior to triggering a second profile. To work around this, start a fresh instance of RenderDoc with the desired capture to profile.

  • In some cases, profiles originating from RenderDoc contain no GPU events. To work around this, repeat the profiling process via “Tools –> Create new RGP Profile”.

  • The System Activity view for a RenderDoc profile will likely differ from that of a native profile. This is due to different command buffer submission patterns between the replayer and the native application.

  • Vulkan-specific: During image creation, RenderDoc sometimes forces additional usage flags that may not have been present in the native application. This effectively disables hardware tiling optimizations that are enabled by default when the application runs natively.

  • Vulkan-specific: The RenderDoc replayer does not support playback of compute work on the async compute queue. This means that the profile will show all compute work running on the graphics queue.

  • Vulkan-specific: In some cases native profiles will contain color/depth clears which may not be present in the RenderDoc profile.

  • DX12-specific: The RenderDoc replayer will sometimes inject CopyBufferRegion calls as part of an optimization to Map/Unmap. These will be visible as tall spikes of compute work in the wavefront occupancy view.

  • If an RGP profile opened by RenderDoc is left running and RenderDoc is restarted, the interop connection between the two can’t be re-established. In this case, the “Create new RGP Profile” menu option will remain disabled after opening a new RenderDoc capture. This is caused by a named pipe having been left open. To resolve the issue, close RGP, and then restart RenderDoc. On Linux®, a similar situation can occur if the RenderDoc process does not shut down cleanly. If this occurs, it may be necessary to wait a few minutes for the connection to be removed before restarting RenderDoc. The following command can be executed from a terminal window to determine if the named pipe is still open:

    • netstat -p | grep "AMD"

Disclaimer

The information contained herein is for informational purposes only, and is subject to change without notice. While every precaution has been taken in the preparation of this document, it may contain technical inaccuracies, omissions and typographical errors, and AMD is under no obligation to update or otherwise correct this information. Advanced Micro Devices, Inc. makes no representations or warranties with respect to the accuracy or completeness of the contents of this document, and assumes no liability of any kind, including the implied warranties of noninfringement, merchantability or fitness for particular purposes, with respect to the operation or use of AMD hardware, software or other products described herein. No license, including implied or arising by estoppel, to any intellectual property rights is granted by this document. Terms and limitations applicable to the purchase or use of AMD’s products are as set forth in a signed agreement between the parties or in AMD’s Standard Terms and Conditions of Sale.

AMD, the AMD Arrow logo, Radeon, Ryzen, CrossFire, RDNA and combinations thereof are trademarks of Advanced Micro Devices, Inc. Other product names used in this publication are for identification purposes only and may be trademarks of their respective companies.

DirectX is a registered trademark of Microsoft Corporation in the US and other jurisdictions.

Vulkan and the Vulkan logo are registered trademarks of the Khronos Group Inc.

OpenCL is a trademark of Apple Inc. used by permission by Khronos Group, Inc.

Microsoft is a registered trademark of Microsoft Corporation in the US and other jurisdictions.

Windows is a registered trademark of Microsoft Corporation in the US and other jurisdictions.

© 2016-2023 Advanced Micro Devices, Inc. All rights reserved.