Rob Clark: freedreno update: moar fps!

2013-09-15


Now that the msm drm/kms kernel driver is merged upstream, I've spent the last few weeks on a bit of a debugging / fixing spree. (Yes, an odd way to start a post about performance/profiling.) I added proper support for mipmaps/cubemaps/etc (multi-slice resources), killed a few gpu lockup bugs, installed a bunch of games, and went looking for (and fixing) rendering issues. I've put together a status table on the freedreno wiki.





In the process, I noticed that some games with low fps, such as supertuxkart, also had unusually low gpu utilization (30-50%). Now, a new graphics driver stack will always have lots of room for optimization (which is certainly true of freedreno). The key is to know which optimization to work on first. It does no good to make the shader compiler generate 2x faster shaders (which I think is currently possible) if that is just going to take you from 30-50% utilization to 15-25% utilization at roughly the same fps. So before we get to the fun optimizations, we need to take care of any cpu side bottlenecks in the driver.





Now the linux perf tool is pretty nice just for identifying purely cpu bottlenecks. In fact it showed me pretty quickly that the upstream IOMMU framework struggles with gpu type workloads. Mapping/unmapping individual pages is not really the way to do it. On the downstream msm-3.4 based android kernel, we have iommu_map_range() and iommu_unmap_range() [1]... using these instead is worth 2-3 fps in xonotic, and probably more in supertuxkart, but we'll come back to that.
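For context, the upstream path boils down to mapping a buffer's backing pages into the gpu's IOMMU domain one sg segment at a time, roughly like the sketch below (this is not the actual msm driver code, just an illustration of the pattern, with error unwinding omitted). The downstream iommu_map_range()/iommu_unmap_range() helpers take the whole scatterlist in a single call instead, so the page table updates and TLB maintenance can be batched:

    #include <linux/iommu.h>
    #include <linux/scatterlist.h>

    /*
     * Illustration only -- not the actual msm driver code.  Each
     * iommu_map() call carries its own per-segment overhead, which is
     * what shows up in the profiles for gpu-style workloads that map
     * and unmap lots of buffers.
     */
    static int map_buffer_pagewise(struct iommu_domain *domain, u32 iova,
                                   struct sg_table *sgt, int prot)
    {
            struct scatterlist *sg;
            unsigned int da = iova;
            int i, ret;

            for_each_sg(sgt->sgl, sg, sgt->nents, i) {
                    ret = iommu_map(domain, da, sg_phys(sg), sg->length, prot);
                    if (ret)
                            return ret;     /* error unwinding omitted */
                    da += sg->length;
            }

            return 0;
    }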



But the perf tool does not really help much with gpu or cpu/gpu interactions, at least not by itself. So, first I added some trace points in the kernel drm/kms driver. In particular, I put tracepoints (a sketch of one follows the list):

  1. tracing the fence # when work is submitted to the gpu, and when we get the completion interrupt
  2. tracing the fence # when the cpu waits on a fence, and when it finishes waiting
  3. tracing when a pageflip is requested and when it completes (after rendering completes and after vsync)
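For reference, kernel tracepoints like these are declared with the TRACE_EVENT() macro in a trace header. The sketch below shows roughly what the fence-submit event could look like; the event and field names are illustrative, not necessarily what the msm driver actually uses:

    /* Illustrative sketch of a trace header -- names/fields are made up,
     * not necessarily the actual msm tracepoints. */
    #undef TRACE_SYSTEM
    #define TRACE_SYSTEM msm

    #if !defined(_MSM_GPU_TRACE_H_) || defined(TRACE_HEADER_MULTI_READ)
    #define _MSM_GPU_TRACE_H_

    #include <linux/tracepoint.h>

    TRACE_EVENT(msm_gpu_submit,
            /* fence # assigned to the submitted rendering job */
            TP_PROTO(u32 fence),
            TP_ARGS(fence),
            TP_STRUCT__entry(
                    __field(u32, fence)
            ),
            TP_fast_assign(
                    __entry->fence = fence;
            ),
            TP_printk("fence=%u", __entry->fence)
    );

    #endif /* _MSM_GPU_TRACE_H_ */

    /* this part must stay outside the multiple-inclusion guard */
    #undef TRACE_INCLUDE_PATH
    #define TRACE_INCLUDE_PATH .
    #undef TRACE_INCLUDE_FILE
    #define TRACE_INCLUDE_FILE msm_gpu_trace
    #include <trace/define_trace.h>

The completion-interrupt, fence-wait, and pageflip events would follow the same pattern, and perf (and the timechart tool below) can then pick them up as ordinary trace events.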


And then I hacked up the perf timechart tool to display gpu information in the timechart, for a nice timeline overview. Currently I have it looking for the msm trace events, but I think that it would be useful to have a small set of generic trace events which all the drm drivers can use, so that tools won't have to be looking for driver specific traces. I think what I have is a reasonable start, but probably needs a bit of work to handle gpu's that have multiple rings, etc.





With that, I fired up supertuxkart again (in demo mode so it will drive itself), and then ran perf timechart record for a couple seconds to capture a short trace:

[timechart: supertuxkart demo, initial capture]

You can see above that there is a new bar at the top, below the cpu bars, for the gpu, showing when the gpu is active. A green overlay bar on the gpu row shows when a pageflip has been requested (typically right after rendering is submitted) and when the pageflip completes (the next vblank after rendering completes). And below, in the per-process bars, a yellow overlay marker shows when the process is waiting on a fence (waiting for some gpu rendering to complete).




And immediately we can see that the bottleneck is a fence that supertuxkart is stalling on before it is able to submit rendering for the next frame. After a little bit of poking, I realized that I should implement support for PIPE_TRANSFER_DISCARD_WHOLE_RESOURCE in the freedreno gallium driver. If this usage bit is set, it is a hint to the gallium driver that the previous buffer contents do not need to be preserved after the upload. So in cases where the backing gem buffer object (bo) is still busy (referenced by previous rendering which is not yet complete), it is better to just delete the bo and create a new one, rather than stalling the cpu. The drm driver holds a ref for bo's that are associated with gpu rendering which has not yet completed, so the pages for the old bo don't go away until the gpu is finished with them.
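The logic in the transfer-map path boils down to something like this simplified sketch (not the actual freedreno code; fd_resource is reduced to the bare minimum here and the helper functions are made-up stand-ins for illustration):

    #include <stdbool.h>
    #include "pipe/p_defines.h"   /* PIPE_TRANSFER_* usage flags */
    #include "pipe/p_state.h"

    struct fd_bo;                  /* kernel gem bo handle (opaque here) */

    /* minimal stand-in for the driver's resource struct */
    struct fd_resource {
            struct pipe_resource base;
            struct fd_bo *bo;      /* backing gem buffer object */
    };

    /* made-up helpers, just to show the flow: */
    bool resource_busy(struct fd_resource *rsc);      /* gpu still references bo? */
    void realloc_bo(struct fd_resource *rsc);         /* orphan old bo, allocate a new one */
    void wait_for_rendering(struct fd_resource *rsc); /* stall until the gpu is done */
    void *map_bo(struct fd_resource *rsc);            /* mmap and return a cpu pointer */

    static void *
    transfer_map_sketch(struct fd_resource *rsc, unsigned usage)
    {
            if ((usage & PIPE_TRANSFER_DISCARD_WHOLE_RESOURCE) &&
                resource_busy(rsc)) {
                    /*
                     * The caller doesn't care about the old contents, and
                     * the old bo is still referenced by queued-up rendering.
                     * Rather than stalling the cpu, orphan the old bo (the
                     * kernel keeps it alive until the gpu is done with it)
                     * and allocate a fresh one to write into.
                     */
                    realloc_bo(rsc);
            } else if (!(usage & PIPE_TRANSFER_UNSYNCHRONIZED)) {
                    /* otherwise, wait for any pending gpu access to finish */
                    wait_for_rendering(rsc);
            }

            return map_bo(rsc);
    }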


With this change, things have improved, but there is still a bottleneck:

[timechart: after implementing PIPE_TRANSFER_DISCARD_WHOLE_RESOURCE]

(note that the timescale differs between these three timecharts, since the capture duration differed)



Oddly we see a lot of activity on kworker (workqueue worker thread in the kernel). This is mainly retire_worker, in particular releasing the reference that the driver holds to bo's for rendering which is now completed. After a bit more digging, it turns out that supertuxkart is creating on the order of 150-200 transient buffers per frame. Unref'ing these, unmapping from IOMMU and cpu, and deleting backing pages for that many buffers takes some time. Even with some optimization in the kernel, there is still going to be a lot of overhead in the associated vma setup/teardown (since many of these buffers are used for vertex/attribute upload, and will need to be mmap'd), zeroing out pages before the next allocation, etc.



So borrowing an idea from i915, I implemented a bo cache in userspace, in libdrm_freedreno. On new allocations, we round up to the next bucket size, and if there is an unused buffer in the bucket cache which is not still busy, we take that buffer instead of allocating a new one. (If I add a BO_FOR_RENDERING flag, like i915, I could take a still-busy gem bo for cases where I know cpu access will not be needed... by the time the gpu starts writing to the buffer, it will no longer be busy.)
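In outline, the cache looks something like the sketch below. This is not the actual libdrm_freedreno code; the types and helpers are illustrative, and the real thing needs locking, aging out of stale entries, and so on:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    struct bo;                                    /* opaque gem buffer object handle */
    struct bo *bo_new_from_kernel(uint32_t size); /* made-up: GEM_NEW ioctl wrapper */
    void bo_destroy(struct bo *bo);               /* made-up: GEM_CLOSE wrapper */
    bool bo_is_busy(struct bo *bo);               /* made-up: still referenced by gpu? */

    /* allocations are rounded up to a fixed set of bucket sizes, so freed
     * bos can be reused by later allocations of a similar size */
    #define NR_BUCKETS 14
    #define MAX_CACHED 64

    struct bo_bucket {
            uint32_t size;                        /* allocation size for this bucket */
            struct bo *free_list[MAX_CACHED];     /* cached, currently-unused bos */
            unsigned nr_free;
    };

    static struct bo_bucket buckets[NR_BUCKETS];

    void bo_cache_init(void)
    {
            uint32_t size = 4096;                 /* one page, doubling per bucket */
            for (unsigned i = 0; i < NR_BUCKETS; i++, size *= 2)
                    buckets[i].size = size;
    }

    static struct bo_bucket *get_bucket(uint32_t size)
    {
            for (unsigned i = 0; i < NR_BUCKETS; i++)
                    if (size <= buckets[i].size)
                            return &buckets[i];
            return NULL;                          /* too big to bother caching */
    }

    struct bo *bo_alloc_cached(uint32_t size)
    {
            struct bo_bucket *bucket = get_bucket(size);

            if (bucket) {
                    /* round up so the bo is reusable for this bucket later */
                    size = bucket->size;

                    /*
                     * Reuse a cached bo, but only if the gpu is done with it;
                     * otherwise mapping it for cpu upload would stall.  (With
                     * an i915-style BO_FOR_RENDERING hint, a still-busy bo
                     * could be handed out when no cpu access is expected.)
                     */
                    for (unsigned i = 0; i < bucket->nr_free; i++) {
                            struct bo *bo = bucket->free_list[i];
                            if (!bo_is_busy(bo)) {
                                    bucket->free_list[i] =
                                            bucket->free_list[--bucket->nr_free];
                                    return bo;
                            }
                    }
            }

            return bo_new_from_kernel(size);
    }

    void bo_free_cached(struct bo *bo, uint32_t size)
    {
            struct bo_bucket *bucket = get_bucket(size);

            if (bucket && bucket->nr_free < MAX_CACHED)
                    bucket->free_list[bucket->nr_free++] = bo;  /* park for reuse */
            else
                    bo_destroy(bo);
    }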



With this, things look much better:

[timechart: after adding the bo cache]

As you can see, the gpu is nearly continuously occupied. And a nice benefit is a drop in cpu utilization. To do this properly, I need to add a MADVISE style ioctl in the msm drm/kms driver, so userspace can advise the kernel that it is keeping a bo around in a cache, and that the kernel is free to free the backing pages under memory pressure, tear down the cpu mapping, etc. This will prevent the wrath of the OOM killer 🙂
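Something along these lines is what I have in mind for the uapi (just a sketch of what it could look like, not an existing interface):

    #include <stdint.h>

    /*
     * Sketch of a possible MADVISE-style uapi -- illustrative only, this
     * ioctl does not exist yet.  Userspace marks cached-but-idle bos as
     * DONTNEED so the kernel may purge their backing pages under memory
     * pressure; when pulling a bo back out of the cache, userspace marks
     * it WILLNEED and checks whether the contents were retained.
     */

    #define MSM_MADV_WILLNEED 0          /* back in use; keep the pages */
    #define MSM_MADV_DONTNEED 1          /* idle in the cache; ok to purge */

    struct drm_msm_gem_madvise {
            uint32_t handle;             /* in:  gem handle */
            uint32_t madv;               /* in:  MSM_MADV_WILLNEED / DONTNEED */
            uint32_t retained;           /* out: 0 if the pages were purged */
    };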

So now with the bottlenecks in the driver worked out, future work to make the gpu render faster (ie, hw binning pass, shader compiler optimizations, etc) will actually bring a meaningful benefit.

Notes:


[1] just fwiw, the ideal IOMMU API would give me a way to make multiple map/unmap updates without tlb/etc flush. This should be even better than the map/unmap_range variants. I know when I'm submitting rendering jobs which reference the buffers to the GPU, so I have good points for a batch IOMMU update flush.
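In other words, something shaped roughly like this (purely hypothetical, these functions do not exist; they just illustrate the usage pattern I'd want):

    #include <linux/iommu.h>

    /* Hypothetical batched IOMMU interface -- not real kernel API, just an
     * illustration of the pattern described in the note above. */

    /* queue up page table updates without issuing TLB maintenance yet */
    int iommu_map_deferred(struct iommu_domain *domain, unsigned long iova,
                           phys_addr_t paddr, size_t size, int prot);
    int iommu_unmap_deferred(struct iommu_domain *domain, unsigned long iova,
                             size_t size);

    /* flush all deferred updates in one go, e.g. at gpu job submission */
    void iommu_flush_deferred(struct iommu_domain *domain);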












