[Other] Optimizing parallel HTTP request batches

閉上眼ゝ說再見 posted on 2016-10-3 16:27:24

Intro

While working on a search engine for tech blog posts and tech events, I noticed that fetching RSS/Atom feeds was starting to take longer, so I thought about a way of speeding up that operation, since there are plans to scale up collection at some point.
This post describes a way of optimizing the process of fetching blog post feeds.
Overview

The query described below sorts requests by their expected duration (response time); its output is the order in which to make the requests. First come all the requests we have timing data for, sorted ascending by duration; after those come the ones we have no timing data for, which can run in any order.
The requests are then split into batches. Within each batch, all the URLs are fetched in parallel, so an estimate of a batch's running time is the maximum response time in that batch. The batches themselves run sequentially.
The aim is to keep URLs with slow response times from slowing down an entire batch.
This is a small example with batch size 3.

unordered
[3,10,4] ; [13,9,2] ; [11,20,8]
total time: 10 + 13 + 20 = 43
[Figure: timeline visualization of the unordered response times]

ordered
[2,3,4] ; [8,9,10] ; [11,13,20]
total time: 4 + 10 + 20 = 34
[Figure: the same requests, this time ordered by expected response time before the batches are built]
The difference between these two timelines is the 9 seconds we save just by reordering.
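
The savings are easy to check in code. Below is a minimal sketch of this cost model, assuming Python for illustration (the batched_total_time helper is hypothetical, not part of the actual collector):

from typing import List

def batched_total_time(durations: List[int], batch_size: int) -> int:
    # batches run sequentially, and each batch takes as long
    # as its slowest request
    return sum(
        max(durations[i:i + batch_size])
        for i in range(0, len(durations), batch_size)
    )

# the unordered example above: 10 + 13 + 20 = 43
assert batched_total_time([3, 10, 4, 13, 9, 2, 11, 20, 8], 3) == 43
# the same durations sorted ascending: 4 + 10 + 20 = 34
assert batched_total_time(sorted([3, 10, 4, 13, 9, 2, 11, 20, 8]), 3) == 34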
After implementing this new logic, the running time for fetching 1597 URLs decreased from 24 min 25 s to 20 min 17 s.
Implementation

In the example below, we use CTEs (WITH queries) to gradually refine the result. First we take the URLs passed in as a parameter when the query is executed; these are the URLs we plan to make requests to.
Then we compute the average response time from previous measurements, since the collector measures and stores this information in the entry_timing table on every run.
After that, a JOIN tells us which feed URLs we already know the response times for, and those are sorted in ascending order by expected duration.
Feeds we have no timing information for are left to run at the end of the collection process.
     
WITH planned_urls AS (
    -- all requests we plan on making
    SELECT unnest(%(urls)s) AS url
), timing_urls AS (
    -- urls that we have timing data for:
    -- the average duration per url, in milliseconds
    SELECT
        feed_url,
        (AVG(EXTRACT(EPOCH FROM a.end_time - a.start_time) * 1000))::integer AS duration
    FROM entry_timing a
    WHERE a.end_time IS NOT NULL AND a.start_time IS NOT NULL
    GROUP BY a.feed_url
), known_urls AS (
    -- requests where we have timing data,
    -- sorted in ascending order of expected duration
    SELECT
        b.feed_url,
        b.duration
    FROM planned_urls a
    JOIN timing_urls b ON a.url = b.feed_url
    ORDER BY b.duration
), unknown_urls AS (
    -- requests where we don't have timing data;
    -- we can't say anything about these, so the order won't
    -- matter and a large sentinel duration sorts them last
    SELECT
        a.url,
        9999999 AS duration
    FROM planned_urls a
    LEFT JOIN timing_urls b ON a.url = b.feed_url
    WHERE b.feed_url IS NULL
), all_reordered AS (
    SELECT * FROM known_urls
    UNION ALL
    SELECT * FROM unknown_urls
)
SELECT
    *, rank() OVER (ORDER BY duration ASC) AS ordinal
FROM all_reordered;
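
For completeness, here is a sketch of how a collector might consume this query, assuming Python with psycopg2 (the %(urls)s placeholder above matches psycopg2's named-parameter style) and the requests library; ORDERING_QUERY, plan_batches and fetch_batches are hypothetical names rather than the actual collector code:

from concurrent.futures import ThreadPoolExecutor

import psycopg2
import requests

BATCH_SIZE = 3

# placeholder for the full WITH query shown above
ORDERING_QUERY = "WITH planned_urls AS ( ... )"

def plan_batches(conn, urls, batch_size=BATCH_SIZE):
    # conn is expected to be a psycopg2 connection.
    # run the ordering query; psycopg2 adapts the Python list
    # to a PostgreSQL array, which unnest() then expands
    with conn.cursor() as cur:
        cur.execute(ORDERING_QUERY, {"urls": urls})
        ordered = [row[0] for row in cur.fetchall()]
    # slice the ranked urls into consecutive batches
    return [ordered[i:i + batch_size]
            for i in range(0, len(ordered), batch_size)]

def fetch_batches(conn, urls):
    # batches run sequentially; urls within a batch are fetched in parallel
    for batch in plan_batches(conn, urls):
        with ThreadPoolExecutor(max_workers=len(batch)) as pool:
            responses = list(pool.map(requests.get, batch))
        # ... parse the feeds and record new start/end times
        # into entry_timing for the next run ...

A thread pool fits here because the work is I/O-bound: per batch, pool.map returns once the slowest request finishes, which is exactly the cost model described in the overview.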
Conclusion

We've seen a way of improving the run time of HTTP requests made in batches: reorder the requests and build the batches using timing information stored during previous runs.