How do I get my entire YouTube watch history?

2022-01-19 youtube-api javascript

I'm trying to get a full list of watched videos for a given user in my YouTube API application. I want to add up the total duration of all the videos.

When I get the list of videos from the history playlist, the API caps it at 50 items. There is pagination, but the total number of items is 50 (not just per page); it appears I can't access more data through the API.

Is there any way I can get this playlist without the data cap? I'm hoping for another method (of using the API) or a way to do it without the API. I know YouTube stores this data, because I can view my entire history (far more than 50 videos).

I'm using this code:

var requestOptions = {
    playlistId: playlistId,
    part: 'snippet',
    maxResults: 50
};
gapi.client.youtube.playlistItems.list(requestOptions);

where playlistId is the ID of the history playlist, which I got from a gapi.client.youtube.channels.list request.
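
For reference, the usual way to page past the 50-items-per-page limit is to pass the nextPageToken from one response as the pageToken of the next request. Below is a minimal sketch of that pattern (assuming gapi.client.youtube is loaded and the user is authorized; historyPlaylistId stands in for the ID described above), although, as described above, the history playlist reportedly stops returning items after about 50 regardless:

// Sketch only: page through a playlist by chaining pageToken values.
// Assumes gapi.client.youtube is loaded and the user is authorized.
function listAllItems(playlistId, pageToken, items) {
    items = items || [];
    return gapi.client.youtube.playlistItems.list({
        playlistId: playlistId,
        part: 'snippet',
        maxResults: 50,        // maximum allowed per page
        pageToken: pageToken   // undefined on the first request
    }).then(function (response) {
        items = items.concat(response.result.items);
        if (response.result.nextPageToken) {
            // Keep fetching until no further page token is returned.
            return listAllItems(playlistId, response.result.nextPageToken, items);
        }
        return items;
    });
}

listAllItems(historyPlaylistId).then(function (items) {
    console.log(items.length);
});

Summing durations would then need a second lookup of the returned video IDs via gapi.client.youtube.videos.list with part 'contentDetails', since playlist items themselves don't include the video duration.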

Edit (2017): I want to clarify that it was always my intention to download my own history, just out of interest to see how much time I have spent watching videos. I still have not been able to do this.

Answer

I wrote a scraper for this task a while ago (in Python 2.7, since updated for 3.5, using Scrapy). There is no official API for this, so it uses a logged-in session cookie and HTML parsing, and it dumps to SQLite by default. https://github.com/zvodd/Youtube-Watch-History-Scraper

How it's done: essentially it opens the URL

https://www.youtube.com/feed/history

with a valid (logged-in) session cookie taken from Chrome. It scrapes every video entry for name, video id (URL), channel/user, description, and length. Then it finds the button at the bottom of the page with the data-uix-load-more-href attribute, which contains the link to the next page, something like:

"/browse_ajax?action_continuation=1&continuation=98h32hfoasau0fu928hf2hf908h98hr%253D%253D&target_id=item-section-552363&direct_render=1"

... re-scrapes the video entries from there, and dumps them all into an SQLite database, which you can then search by any of the fields (name, length, user, description, etc.).
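
The linked project is written in Python with Scrapy; purely to illustrate the approach described above (logged-in cookie, scrape the entries, follow the load-more link), here is a minimal Node.js sketch in the same language as the question's code. The cookie value, the regexes, and the assumed page markup are placeholders, and YouTube has changed the history page since this answer was written, so treat it as a sketch of the idea rather than working code:

// Illustrative sketch of the approach, not the linked Scrapy project.
// Assumes Node 18+ (built-in fetch) and a session cookie copied from a logged-in browser.
const COOKIE = 'PASTE_YOUR_YOUTUBE_SESSION_COOKIE_HERE'; // assumption: copied from Chrome's dev tools

async function fetchPage(url) {
    const res = await fetch(url, { headers: { Cookie: COOKIE } });
    return res.text();
}

async function scrapeHistory() {
    let url = 'https://www.youtube.com/feed/history';
    while (url) {
        const html = await fetchPage(url);
        // Video IDs appear in links like /watch?v=VIDEO_ID (the exact markup has changed over time).
        const videoIds = [...html.matchAll(/\/watch\?v=([\w-]{11})/g)].map(m => m[1]);
        console.log(videoIds);
        // Follow the "load more" continuation link the answer describes.
        const more = html.match(/data-uix-load-more-href="([^"]+)"/);
        url = more ? 'https://www.youtube.com' + more[1].replace(/&amp;/g, '&') : null;
    }
}

scrapeHistory();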

So until they change their feed/history page, it's doable and done. I might even update it.
