
    Academic Seminar

    An Asymptotically Tight Learning Algorithm for Mobile-Promotion Platforms






    Speaker: Anyan Qi, Associate Professor of Operations Management, Naveen Jindal School of Management, The University of Texas at Dallas

    Topic: An Asymptotically Tight Learning Algorithm for Mobile-Promotion Platforms

    Time: Monday, March 7, 2022, 16:00–18:00

    Venue: Weilun 453

    Language: English

    Organizer: Department of Management Science and Engineering

    Abstract: Operating under both supply-side and demand-side uncertainties, a mobile-promotion platform conducts advertising campaigns for individual advertisers. Campaigns arrive dynamically over time, which is divided into seasons; each campaign requires the platform to deliver a target number of mobile impressions from a desired set of locations over a desired time interval. The platform fulfills these campaigns by procuring impressions from publishers, who supply advertising space on apps, via real-time bidding on ad exchanges. Each location is characterized by its win curve, i.e., the relationship between the bid price and the probability of winning an impression at that bid. The win curves at the various locations of interest are initially unknown to the platform, and it learns them on the fly based on the bids it places to win impressions and the realized outcomes. Each acquired impression is allocated to one of the ongoing campaigns. The platform's objective is to minimize its total cost (the amount spent in procuring impressions and the penalty incurred due to unmet targets of the campaigns) over the time horizon of interest. Our main result is a bidding and allocation policy for this problem. We show that our policy is the best possible (asymptotically tight) for the problem using the notion of regret under a policy, namely the difference between the expected total cost under that policy and the optimal cost for the clairvoyant problem (i.e., one in which the platform has full information about the win curves at all the locations in advance): The regret under any policy is Ω(√I), where I is the number of seasons, and that under our policy is O(√I). We demonstrate the performance of our policy through numerical experiments on a test bed of instances whose input parameters are based on our observations at a real-world mobile-promotion platform.
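
    To make the setup concrete, below is a minimal single-location sketch in Python of the kind of problem the abstract describes: a bidder learns an unknown win curve from realized auction outcomes while trying to meet an impression target at low total cost (spend plus shortfall penalty). The logistic-style win curve, the bid grid, the first-price payment rule, and the epsilon-greedy selection are illustrative assumptions only; this is not the speaker's policy, whose design and O(√I) regret guarantee are the subject of the talk.

        import math
        import random

        def true_win_prob(bid):
            # Hypothetical win curve (unknown to the bidder): probability of
            # winning an impression when bidding `bid`. Illustrative only.
            return 1.0 - math.exp(-2.0 * bid)

        BIDS = [0.1 * k for k in range(1, 11)]  # discretized bid levels
        TARGET = 200      # impressions the campaign must deliver
        PENALTY = 1.5     # cost per undelivered impression
        ROUNDS = 1000     # bidding opportunities
        EPSILON = 0.1     # exploration rate

        wins = {b: 0 for b in BIDS}
        tries = {b: 0 for b in BIDS}
        spend, delivered = 0.0, 0

        def est_cost_per_win(b):
            # Empirical cost per won impression; untried bids get an
            # optimistic estimate so each level is eventually explored.
            p = wins[b] / tries[b] if tries[b] else 1.0
            return b / max(p, 1e-6)

        for _ in range(ROUNDS):
            if random.random() < EPSILON:
                bid = random.choice(BIDS)               # explore
            else:
                bid = min(BIDS, key=est_cost_per_win)   # exploit current estimates
            tries[bid] += 1
            if random.random() < true_win_prob(bid):    # realized exchange outcome
                wins[bid] += 1
                delivered += 1
                spend += bid                            # pay the bid (first-price simplification)

        shortfall = max(0, TARGET - delivered)
        print(f"delivered={delivered}, spend={spend:.2f}, "
              f"total cost={spend + PENALTY * shortfall:.2f}")

    The regret criterion in the abstract compares the expected total cost of such a learning policy with the cost a clairvoyant bidder, who knows all win curves in advance, would incur; the talk's policy drives this gap to O(√I), matching the Ω(√I) lower bound.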


    Bio: Anyan Qi is an Associate Professor of Operations Management at the Naveen Jindal School of Management, The University of Texas at Dallas. He earned a Ph.D. in Technology and Operations from the Ross School of Business, University of Michigan, as well as a Bachelor's degree in Automation from the School of Information Science and Technology and a Bachelor's degree in Economics from the School of Economics and Management, both at Tsinghua University.

    His work has been published in Management Science, Operations Research, Manufacturing & Service Operations Management, and Production and Operations Management, and his papers have been recognized in multiple paper competitions. He serves as a Senior Editor for Production and Operations Management and as a referee for leading journals, including Management Science, Operations Research, Manufacturing & Service Operations Management, Production and Operations Management, and Strategic Management Journal. He has received the Management Science Distinguished Service Award three times and the M&SOM Meritorious Service Award four times.